Remove first digit from Swiss Coordinates

Quick tip. I’m using data from air pollution sensors provided here. The coordinates are of the form 2’710’500 / 1’259’810, which use the Swiss coordinate system. pyproj is the de facto standard for coordinate processing; this page also helped.

But I couldn’t make it work: the coordinate values were too big.

Eventually I realised one needs to remove the leading digits from the two numbers: the 2 from the East coordinate and the 1 from the North coordinate. This is mentioned in the Wikipedia article:

In order to nonetheless achieve a clear distinction between the two systems, an additional digit was added to the coordinates of LV95: any East coordinate (E) now starts with a 2, and any North coordinate (N) with a 1. Consequently, LV95 coordinates are given by pairs of 7-digit numbers, whereas LV03 used pairs of 6-digit numbers – for instance the coordinates (2 600 000m E / 1 200 000m N) in LV95 would be expressed as (600 000m E / 200 000m N) in LV03.

https://en.wikipedia.org/wiki/Swiss_coordinate_system
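
In other words, going from LV95 to LV03 is just a matter of subtracting those offsets; a minimal sketch (lv95_to_lv03 is only an illustrative helper name):

def lv95_to_lv03(E, N):
    # strip the leading 2 (East) and 1 (North) by subtracting the LV95 offsets
    return E - 2_000_000, N - 1_000_000

print(lv95_to_lv03(2_710_500, 1_259_810))  # (710500, 259810)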

In summary, my code now looks like:

from pyproj import Proj, transform

sites = []
# LV03 (6-digit) coordinates, i.e. with the leading 2/1 already removed
sites.append({'name':'Zurich, Schimmelstrasse (ZH)','E':681942,'N':247245,'height':415})
sites.append({'name':'Zurich, Heubeeribüel (ZH)','E':685126,'N':248460,'height':610})
pWorld = Proj(init='epsg:4326')  # WGS84 longitude/latitude
pCH = Proj(init='epsg:21781')    # CH1903 / LV03 (Swiss coordinates)
for site in sites:
    # convert from Swiss coordinates to (longitude, latitude)
    print(transform(pCH, pWorld, site['E'], site['N']))
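
As an aside, on newer pyproj versions (where Proj(init=...) and transform are deprecated) the Transformer API should do the same job, and EPSG:2056 (CH1903+ / LV95) accepts the 7-digit coordinates directly, so the leading digits wouldn’t need to be stripped at all; a minimal sketch:

from pyproj import Transformer

# LV95 (EPSG:2056) straight to WGS84 (EPSG:4326); always_xy gives (lon, lat) output
lv95_to_wgs84 = Transformer.from_crs("EPSG:2056", "EPSG:4326", always_xy=True)
print(lv95_to_wgs84.transform(2_710_500, 1_259_810))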

Tensorflow and Matrices containing Variables

Recently Pablo, Dennis and I were wondering about the best way to build tensors with variables inside. I’ve found three ways (that largely mirror the numpy equivalents): basically just different combinations of stacking, concatenating, reshaping and gathering. [related SO question]

import tensorflow as tf
import numpy as np

a = tf.Variable(1.0, dtype=np.float32)
b = tf.Variable(2.0, dtype=np.float32)
with tf.GradientTape() as t:
    # these lines are equivalent: each builds M = [[a**2, a**2/2], [1, b**2]]
    M = tf.reshape(tf.gather([a**2, b**2, a**2/2, 1], [0, 2, 3, 1]), [2, 2])  # gather then reshape
    M = tf.reshape(tf.stack([a**2, a**2/2, 1, b**2]), [2, 2])                 # stack then reshape
    M = tf.concat([tf.stack([[a**2, a**2/2]]), tf.stack([[1, b**2]])], 0)     # stack rows, then concat
    gradients = t.gradient(tf.linalg.det(M), [a, b])
    print(gradients)
[<tf.Tensor: shape=(), dtype=float32, numpy=7.000001>, <tf.Tensor: shape=(), dtype=float32, numpy=4.0000005>]
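
As a quick sanity check, these numbers match the analytic gradient: with M = [[a**2, a**2/2], [1, b**2]] the determinant is a**2*b**2 - a**2/2, so d(det)/da = 2*a*b**2 - a and d(det)/db = 2*a**2*b, which at a=1, b=2 give 7 and 4 (up to float32 rounding):

a_val, b_val = 1.0, 2.0
print(2*a_val*b_val**2 - a_val, 2*a_val**2*b_val)  # 7.0 4.0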

I thought I’d just add that one (possibly unwise) default behaviour of the gradient method is that, if you ask for the derivative of a matrix, it returns the derivative of the reduce_sum of that matrix:

with tf.GradientTape() as t:
    M = tf.concat([tf.stack([[a**2, a**2/2]]), tf.stack([[1, b**2]])], 0)
    # asking for the gradient of the non-scalar M itself
    gradients = t.gradient(M, [a, b])
    print(gradients)
[<tf.Tensor: shape=(), dtype=float32, numpy=3.0>, <tf.Tensor: shape=(), dtype=float32, numpy=4.0>]

As one can see, this is the derivative of the sum of M: at a=1, b=2 we get d(sum M)/da = 2a + a = 3 and d(sum M)/db = 2b = 4.
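
If you actually want the per-element derivatives rather than the gradient of the sum, GradientTape.jacobian should give the full Jacobian; a rough sketch continuing the example above:

with tf.GradientTape() as t:
    M = tf.concat([tf.stack([[a**2, a**2/2]]), tf.stack([[1, b**2]])], 0)
# one (2,2) tensor of dM_ij/d(var) per variable, e.g. dM/da = [[2a, a], [0, 0]]
jacobians = t.jacobian(M, [a, b])
print(jacobians)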