## [GSoC’18|OrbitDeterminator|Aakash] Week #7-8 : Propagation Model

`About the Author: Hello guys, I am Aakash Deep. I am an undergraduate student at Indraprastha Institute of Information Technology, Delhi (IIIT-D) in Computer Science and Engineering (CSE). I am a speed-cuber, and my hobby is solving Rubik's Cubes. I am also the founder of the Rubik's Cube Club at IIIT-D.`

This post is a continuation of the series of posts from the past weeks. To read those, use the links for blog1, blog2 and blog3.

## Highlights

• The database module was added earlier to create and maintain/update the database; MySQL was used for it.
• The Gibbs method was implemented, using OOP concepts for ease of use and better maintenance.
• The propagation model was implemented, and its output is starting to make sense.

## Database Module

The database module contains two scripts: initDatabase.py and scraper.py. MySQL was used, so the database was created and updated locally on the system. This later caused a problem in the build process of the whole module: because the database was stored locally, the module tried to connect to it during the build, and the build failed. For that reason the database module has been removed for now; we will add it back after looking into the issue.

## Propagation Model

At first, the code was somewhat hard-coded: it read the TLE by scraping the Celestrak website, used that TLE to compute state vectors over 8 hours, and then created a file and stored the calculated data in it. The problem with this is that we could not use it with our own data.

Now the code no longer scrapes data from the Celestrak website, which makes it more flexible: we can feed it any TLE as input. There is also no more file handling in the code. Instead of storing the data in a file, it returns the final output as a vector containing all the state vectors, which is far more usable.

All the functions are inside a class, so they can be called easily by creating an object of the class and passing both lines of the TLE as input; this means the propagator can be accessed from anywhere. As output, it returns a final vector containing all the state vectors for the 8 hours.

The output of gibbsMethod.py was changed to the same format: a single vector holds all the state vectors and is returned as the output of the function.

## Extras

Tests for sgp4.py and gibbsMethod.py were written in test_sgp4.py and test_gibbsMethod.py respectively. Every function is documented in the code with docstrings.

## [GSoC2018|Orbitdeterminator|Arya] Week #8-9: Implementing Kalman Filter

After the implementation of the Cowell propagator, it was time for the Kalman Filter. It is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone. A Kalman Filter is the ideal filter for a linear process having Gaussian noise. Since our process is not linear, a modified version of the filter must be used. This is called the Extended Kalman Filter. It linearizes about the point in question and applies the regular Kalman Filter to it.

To apply the filter, the following things are required:

• A function f which outputs the next state, given the current state.
• The matrix F which is the Jacobian of f.
• The matrix H which converts the state into the corresponding observations.
• A matrix Q denoting process noise. It accounts for the inaccuracies in the model.
• A matrix R denoting measurement noise.

The variables involved during the calculations are:

• x – the state of the system
• z – the observations of the system
• P – covariance matrix of x. It denotes the inaccuracy in the estimate. As the filter runs, P is supposed to decrease.
• K – the Kalman gain. This matrix lies between 0 and I and denotes how much the observations are trusted versus how much the model is trusted.

The filter consists of two major steps:

• Predict – in this step, we predict the next state (x) and covariance matrix (P) using our model.
• Update – in this step, we update our estimates of x and P using observations (z).

The equations of the filter are:

Predict:

\begin{align*} \hat {\mathbf x}_{k|k-1} &= f(\hat {\mathbf x}_{k-1|k-1}) \\ {\mathbf P}_{k|k-1} &= {\mathbf F}_k{\mathbf P}_{k-1|k-1}{\mathbf F}^{\mathsf T} + {\mathbf Q}_k \end{align*}

Update:

\begin{align*} \hat {\mathbf y}_k &= \hat {\mathbf z}_k - h(\hat {\mathbf x}_{k|k-1}) \\ {\mathbf K}_k &= {\mathbf P}_{k|k-1}{\mathbf H}_k^{\mathsf T} \left ({\mathbf H}_k{\mathbf P}_{k|k-1}{\mathbf H}_k^{\mathsf T} + {\mathbf R}_k \right)^{-1} \\ \hat {\mathbf x}_{k|k} &= \hat {\mathbf x}_{k|k-1} + {\mathbf K}_k\hat {\mathbf y}_k \\ {\mathbf P}_{k|k} &= \left(\mathbf I - {\mathbf K}_k{\mathbf H}_k \right ){\mathbf P}_{k|k-1} \end{align*}

The filter has to be initialized with a value of x and P. If the initial value of P is not known, it can be set to an arbitrary large value.
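The predict/update cycle above can be condensed into a few lines of NumPy. This is a generic, minimal sketch (not the project's actual code); the linear `H` reflects the identity observation model used later in this post, and `f` and `F` are supplied by the caller:

```python
import numpy as np

def ekf_step(x, P, z, f, F, H, Q, R):
    """One Extended Kalman Filter cycle: predict with the model f,
    then update with the observation z. F is the Jacobian of f."""
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Running this repeatedly on noisy observations of a static state shows the expected behaviour: P shrinks from its large initial value and the estimate settles near the truth.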

### My implementation

Currently it is unclear whether the DGSN will be able to supply complete velocity information at any time. Hence, I left velocity out of the computation and focused only on the position, so x is the position vector. f in this case is `rk4(_,t1,t2)`. To find F, I numerically computed the Jacobian of f with a step size of 0.0005 km. I set Q and R as diagonal matrices with constant values of 10 km and 30 km respectively. Since we get the coordinates directly from the DGSN, h is the identity function and H is the identity matrix. I initialized P as a diagonal matrix with large values. Then I ran the filter, and after it converged it started giving out smoother values.
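The numerical Jacobian step can be done by finite differences. A minimal sketch (0.0005 is the step size quoted above; the `f` in the test is just a stand-in for the actual propagator):

```python
import numpy as np

def numerical_jacobian(f, x, h=0.0005):
    """Approximate the Jacobian of a vector function f at x
    by central differences with step h."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = len(f(x))
    J = np.zeros((m, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = h
        # derivative of every output w.r.t. input component i
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * h)
    return J
```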

### Possible simplifications

I observed that for short intervals of time, F is approximately equal to the identity matrix. Since computing the Jacobian is expensive, we can set F equal to I for short periods. For long periods since observations are to be trusted more, we can set x to z directly. I also observed that P is approximately equal to a scalar times I. Making these simplifications will transform the matrix equations into scalar ones:

Predict:

\begin{align*} \hat {\mathbf x}_{k|k-1} &= f(\hat {\mathbf x}_{k-1|k-1}) \\ p_{k|k-1} &= p_{k-1|k-1} + q_k \end{align*}

Update:

\begin{align*} \hat {\mathbf y}_k &= \hat {\mathbf z}_k - \hat {\mathbf x}_{k|k-1} \\ k_k &= p_{k|k-1}(p_{k|k-1}+r_k)^{-1} \\ \hat {\mathbf x}_{k|k} &= \hat {\mathbf x}_{k|k-1} + k_k\hat {\mathbf y}_k \\ p_{k|k} &= (1-k_k)p_{k|k-1} \end{align*}

If q and r are constants then we can further simplify the equations to find the final steady state Kalman Gain k and covariance p:

\begin{align*} p_{\text{steady}} &= \frac{-q+\sqrt{q^2+4qr}}{2} \\ k_{\text{steady}} &= \frac{q+\sqrt{q^2+4qr}}{2r+q+\sqrt{q^2+4qr}} \end{align*}
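These closed forms can be checked numerically: iterating the scalar predict/update covariance recursion should converge to the same values. A quick sketch (not project code):

```python
import math

def steady_state(q, r):
    """Closed-form steady-state covariance and Kalman gain for the
    scalar filter with constant process noise q and measurement noise r."""
    s = math.sqrt(q * q + 4 * q * r)
    p_steady = (-q + s) / 2
    k_steady = (q + s) / (2 * r + q + s)
    return p_steady, k_steady

def iterate(q, r, p0=1e6, n=200):
    """Run the scalar covariance recursion until it settles."""
    p = p0
    for _ in range(n):
        p_pred = p + q                 # predict
        k = p_pred / (p_pred + r)      # gain
        p = (1 - k) * p_pred           # update
    return p, k
```

With q = 10 and r = 30 (the values used above), both routes agree: the recursion converges to the closed-form p and k.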

The simplified equations become:

Predict:

$\hat {\mathbf x}_{k|k-1} = f(\hat {\mathbf x}_{k-1|k-1})$

Update:

$\hat {\mathbf x}_{k|k} = \hat {\mathbf x}_{k|k-1} + k_{\text {steady}} (\hat {\mathbf z}_k - \hat {\mathbf x}_{k|k-1})$

### Possible improvements

Of course, the simplifications highlighted above are only possible because our covariance matrices are so simple. Some possible improvements are:

• Use time-varying Q and R matrices. After a long interval with no measurements, the uncertainty in position is higher, so Q should be larger, and vice-versa.
• Try to include velocity in the state. Right now, we are not sure whether the DGSN can measure all three components of velocity at an instant, but we know that it will be able to measure one component using Doppler shift. Thus we can try to incorporate it into the filter using a time-varying R matrix.

## [GSoC2018|OrbitDeterminator|Arya] Week #7-8: Implementing Cowell propagator

Last time, when we tried to use the SGP4 propagator in our Kalman Filter, we ran into problems. This was because you can't stop the algorithm at a certain stage and continue from there. Suppose you want to propagate from $t_1$ to $t_2$: you can propagate directly from $t_1$ to $t_2$, but you can't propagate from $t_1$ to $t_3$ to $t_2$. So I set out to make our own propagator taking into account the two major perturbations – the oblateness of the Earth and atmospheric drag. Instead of making an analytical propagator like SGP4, I went with a numerical one, because that can be stopped and restarted at any point. So the first challenge was making a numerical integrator.

### Numerical Integration

Numerical integrators are used to advance a differential equation from one point to another. I studied and tested various integrators and finally settled on the classical 4th-order Runge-Kutta integrator (RK4), because of its:

• Good accuracy for small to medium step sizes
• Low computation cost
• Easy to implement

Let the initial value problem be $\dot y = f(t,y)$ and $y(t_0) = y_0$ .

Choose a step size $h > 0$ and define

\begin{align*} y_{n+1} &= y_n + \frac{1}{6}(k_1+2k_2+2k_3+k_4) \\ t_{n+1} &= t_n + h \end{align*}

for  $n = 0,1,2,3,\dots$ where,

\begin{align*} k_1 &= h\,f\left( t_n,y_n \right) \\ k_2 &= h\,f\left( t_n + \frac{h}{2}, y_n + \frac{k_1}{2} \right) \\ k_3 &= h\,f\left( t_n + \frac{h}{2}, y_n + \frac{k_2}{2} \right) \\ k_4 &= h\,f\left( t_n + h, y_n + k_3 \right) \\ \end{align*}
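The scheme above translates directly into code. A minimal sketch, verified here on the simple ODE $\dot y = y$:

```python
def rk4_step(f, t, y, h):
    """Advance the ODE y' = f(t, y) by one step of classical RK4."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return t + h, y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

Integrating $\dot y = y$ from $y(0)=1$ to $t=1$ with 100 steps reproduces $e$ to about eight decimal places, which illustrates the 4th-order accuracy.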

Note that these equations work equally well whether y is a scalar or a vector. In our case, the differential equation is $\mathbf{\ddot r}=-\frac{\mu}{r^3}\vec{\mathbf r} + \vec{\mathbf a}_p$. It is not in a form suitable for direct use in RK4; to use it, we have to vectorize the equation. First, let's define the state of the system $\mathbf s$ as $[x, y, z, \dot x, \dot y, \dot z]^T$. Let $r = \sqrt{x^2+y^2+z^2}$. Then,

$\dot{\mathbf s} = \left[\dot x,\; \dot y,\; \dot z,\; -\frac{\mu}{r^3}x + a_{p,x},\; -\frac{\mu}{r^3}y + a_{p,y},\; -\frac{\mu}{r^3}z + a_{p,z}\right]^T$

This form can be directly used in the RK4 integrator.
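As a sketch (not the project's actual code), the vectorized right-hand side with the perturbing acceleration set to zero looks like this:

```python
import numpy as np

MU = 398600.4418  # km^3 s^-2, Earth's gravitational parameter

def state_derivative(t, s):
    """Derivative of s = [x, y, z, vx, vy, vz] under point-mass
    gravity only (perturbations a_p set to zero here)."""
    r_vec = s[:3]
    v_vec = s[3:]
    r = np.linalg.norm(r_vec)
    accel = -MU / r ** 3 * r_vec
    return np.concatenate((v_vec, accel))
```

Feeding this function to the RK4 step above propagates the full six-component state.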

### Modelling oblateness of the Earth

Spherical harmonics are used to model the gravity field around the Earth, in a way similar to how a Taylor series works: there is a coefficient for each degree of a term in the expansion. The most important coefficient is J2; the following coefficients (J3, J4, …) are roughly a thousand times smaller. Considering only J2, the gravitational potential around the Earth is:

$U(r,\phi) = -\frac{\mu}{r}+\frac{1}{2} \frac{\mu}{r^3} J_2 R_{\oplus}^2(3\sin^2\phi -1)$

where,
$r$ is the radius from the centre
$\phi$ is the latitude of the point with respect to the equator
$J_2$ is a dimensionless constant equal to $0.001083$
$R_{\oplus}$ is the equatorial radius of the Earth, equal to $6378.1\, \text{km}$

To get the acceleration in xyz axes, find the gradient of U and transform coordinates to cartesian. Doing this, we get:

\begin{align*} a_x &= \frac{3}{2}\frac{J_2 R_{\oplus}^2}{r^5}\mu x\left(5\frac{z^2}{r^2}-1\right ) \\ a_y &= \frac{3}{2}\frac{J_2 R_{\oplus}^2}{r^5}\mu y\left(5\frac{z^2}{r^2}-1\right ) \\ a_z &= \frac{3}{2}\frac{J_2 R_{\oplus}^2}{r^5}\mu z\left(5\frac{z^2}{r^2}-3\right ) \\ \end{align*}

These equations implement J2 perturbation. The main effect of J2 perturbation is precession of the nodes and the argument of perigee.
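The three components above, written out in code (a sketch using the constants quoted in the text):

```python
import numpy as np

J2 = 0.001083       # dimensionless zonal coefficient
R_E = 6378.1        # km, equatorial radius of the Earth
MU = 398600.4418    # km^3 s^-2, Earth's gravitational parameter

def j2_acceleration(r_vec):
    """J2 perturbing acceleration (km/s^2) at geocentric position r_vec (km)."""
    x, y, z = r_vec
    r = np.linalg.norm(r_vec)
    coef = 1.5 * J2 * MU * R_E ** 2 / r ** 5
    return np.array([
        coef * x * (5 * z ** 2 / r ** 2 - 1),
        coef * y * (5 * z ** 2 / r ** 2 - 1),
        coef * z * (5 * z ** 2 / r ** 2 - 3),
    ])
```

At a point on the equatorial plane the z-component vanishes, and the magnitude is roughly a thousandth of the central gravity, as expected for J2.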

### Adding drag to the propagator

The simplest model for drag force is $\vec{\mathbf a} = -\frac{1}{2}\frac{C_DA}{m}\rho\,v_{rel}\vec{\mathbf v}_{rel}$ . We have to determine the air density, relative velocity and drag coefficients. For air density I used the model given on this website and adjusted the constants for the current solar cycle. The final air density model is:

$\rho = 0.6\exp\left(-\frac{(h-175)(29.4-0.012h)}{915}\right) \text{kg}\, \text{km}^{-3}$

where h is the altitude of the satellite above the surface of the Earth.
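The model in code form (a sketch; h in km, density in kg/km³ as in the formula above):

```python
import math

def air_density(h):
    """Atmospheric density (kg/km^3) at altitude h (km), using the
    exponential model quoted above."""
    return 0.6 * math.exp(-(h - 175.0) * (29.4 - 0.012 * h) / 915.0)
```

By construction the density equals 0.6 kg/km³ at h = 175 km and falls off with altitude across the LEO range.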

To find the relative velocity, one must account for the rotation of the Earth. $\vec{\mathbf v}_{surface} = \vec{\mathbf \omega} \times \vec{\mathbf r}$ where $\vec{\mathbf \omega} = 7.29115 \times 10^{-5} \ \hat{\mathbf z} \ \text{rad/s}$ .

$\vec{\mathbf v}_{rel} = \vec{\mathbf v} - \vec{\mathbf v}_{surface}$
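In code, with $\vec\omega$ along $\hat{\mathbf z}$ (a sketch, not the project's own implementation):

```python
import numpy as np

OMEGA_E = 7.29115e-5  # rad/s, rotation rate of the Earth

def relative_velocity(r_vec, v_vec):
    """Velocity of the satellite relative to the co-rotating atmosphere:
    v_rel = v - omega x r."""
    omega = np.array([0.0, 0.0, OMEGA_E])
    return v_vec - np.cross(omega, r_vec)
```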

After putting together all this, I adjusted the drag coefficient so that satellites decay at approximately the current rate of decay. This finishes the implementation of drag perturbation.

Combining both the perturbations we get: $\ddot{\mathbf r} = -\frac{\mu}{r^3}\vec{\mathbf r} + \vec{\mathbf a}_{\text{oblateness}} + \vec{\mathbf a}_{\text{drag}}$
Putting it in the RK4 integrator completes the propagator.

### Problems faced

I tried the algorithm on a recent TLE of the ISS (shown below).

```
ISS (ZARYA)
1 25544U 98067A   18190.66798432  .00001104  00000-0  24004-4 0  9994
2 25544  51.6418 266.6543 0003456 290.0933 212.4518 15.54021918122018
```

According to this, it makes 15.54021 revolutions a day. This translates to a time period of 5559.8 secs. However, according to other simulators I found online, the actual period is around 5556 secs. A difference of 4 secs might not sound like much, but over the course of a day it adds up. The Wikipedia page of the ISS lists the time period to be 92.49 mins (5549.4 secs) but also states that it makes 15.54 orbits in a day (5559.8 secs). This is a contradiction. I searched for this issue and tried to correct it, but failed. Right now, the only method is to adjust the semi-major axis by a few kilometres to fix the time period. I will try to correct it later.
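The period figures quoted above come straight from the mean-motion field of the TLE; the conversion is simply seconds-per-day divided by revolutions-per-day:

```python
def period_from_mean_motion(revs_per_day):
    """Orbital period in seconds from the TLE mean-motion field (revs/day)."""
    return 86400.0 / revs_per_day
```

For the ISS TLE above, `period_from_mean_motion(15.54021918)` gives roughly 5559.8 seconds, matching the figure in the text.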

### Future Work

• Implementing the Kalman Filter
• Testing against real world satellites

## [GSoC’18|OrbitDeterminator|Aakash] Week #5-6 : Implementation of Gibb’s Method

`About the Author: Hello guys, I am Aakash Deep. I am an undergraduate student at Indraprastha Institute of Information Technology, Delhi (IIIT-D) in Computer Science and Engineering (CSE). I am a speed-cuber, and my hobby is solving Rubik's Cubes.`

This post is a continuation of the series of posts from the past. You can access those posts at the following links: blog1 and blog2.

## Highlights

• The database was created. For this, the script initDatabase.py was written; it creates the database with all the necessary tables.
• The database is maintained by the script scraper.py, which updates the tables of the database.
• The propagation model was implemented using the SGP4 algorithm, which computes state vectors (a pair of position and velocity vectors) from a TLE.

## Week 5 and 6

The Gibbs method was implemented. It takes three position vectors as input and, after computation, gives one position vector and one velocity vector (together known as a state vector) as output. OOP concepts are used in the implementation; the code is in gibbsMethod.py. Using OOP supports data abstraction and data encapsulation.

For input, a file from /example_data is used. It contains four attributes: time, x coordinate, y coordinate and z coordinate, which are the position coordinates at that particular time epoch. As the file contains a lot of position vectors (8000 to be precise), a set of three consecutive vectors is used at a time for the Gibbs method. For the next iteration, the first vector is removed and a new third vector is added to the set. Since we compute one state vector from every set of three position vectors, the number of output state vectors is two less than the number of input position vectors. So a vector sized accordingly is created to hold all the output state vectors. The class also contains a function orbital_elements which converts these state vectors into orbital elements.
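The sliding-window pass described above can be sketched in a few lines (the `gibbs` callable here is a hypothetical stand-in for the real solver in gibbsMethod.py):

```python
def sliding_state_vectors(positions, gibbs):
    """Run a three-vector solver over every window of consecutive
    position vectors; returns len(positions) - 2 results."""
    return [gibbs(positions[i], positions[i + 1], positions[i + 2])
            for i in range(len(positions) - 2)]
```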

The propagation model, SGP4, was also improved. Previously the output did not make much sense, but after revision the velocity vector is now reasonable. The position vector, however, comes out constant for all the output state vectors; I am working on it to get a correct output.

### Work in Progress

• Documentation
• Testing

## [GSoC2018|DirectDemod|Vinay] Week #7-8: Meteor M2 demodulation (QPSK) and BPSK demodulation

Many modern communication systems are moving away from traditional analog communication methods to digital communication systems. Using a digital system makes the communication more robust, though it also makes the system more complicated. In this blog we will look at PSK (phase-shift keying) modulation, specifically QPSK and BPSK.

## [GSoC’18|VisMa|Shantanu] Week #06-07: Finding the Unknown

This is GSoC'18 log#03. Here I will cover what I have done in weeks #06-07. Link to the previous log.

## Done so far…

I have implemented a simple solver which is capable of solving a given variable in terms of other variables and numbers. Below is a simple example of how the solver works.

# Angles-only orbit determination: implementation of Gauss method

During these weeks, I worked on implementing the Gauss method for preliminary orbit determination using only angle measurements of a celestial body, either for Sun-centered (heliocentric, e.g., asteroids, comets, planets) or Earth-centered (geocentric, e.g., artificial satellites, space debris) orbits. The Gauss method takes, for a given celestial object, only three pairs of right-ascension and declination observations, as seen from the surface of the Earth, and computes an orbit from those three observations alone.

In order to implement the Gauss method, I followed closely the text by Howard D. Curtis, "Orbital Mechanics for Engineering Students", especially section 5.10. For more details about the math behind the method, the reader is referred to this book.

## The core idea and implementation

Gauss method may be summarized as follows: if we know the geodetic latitude `phi_deg`, the altitude above the reference ellipsoid `altitude_km`, the flattening of the Earth `f`; and if we have a list of three right ascensions `ra_hrs` and three declinations `dec_deg` associated to an observed celestial object, as well as the local sidereal time associated to those three observations `lst_deg`, and the time elapsed between them `t_sec`, then we are able to compute the cartesian state of the celestial body, referred either to the Sun or to the Earth, depending on the nature of its orbit. Here, we note that each one of `ra_hrs`, `dec_deg`, `lst_deg` and `t_sec` are lists with exactly three elements. Also, we note that the right-ascension and the declination determine the line-of-sight (LOS) of the celestial object with respect to the observer. Thus, we are able to write a function like this:

`def gauss_method(phi_deg, altitude_km, f, ra_hrs, dec_deg, lst_deg, t_sec)`
and the output of this function is the cartesian position `r2 = [x,y,z]` and the cartesian velocity `v2 = [u,v,w]` of the object, at exactly the time of the second observation.

### A crucial detail

The output of the Gauss method depends crucially on finding the real, positive roots of an 8th-order univariate polynomial, which is solved by means of Newton's method in the book; we tested this using `scipy.optimize.newton`, but we actually obtained better numerical stability using `numpy.roots`, which allows one to obtain the roots of a univariate polynomial of arbitrary finite order. One drawback of this approach is that, if more than one real, positive root of the polynomial is obtained, one has to manually choose the root which corresponds to the physical motion of the object. But once the adequate root has been chosen, the Gauss method gives a very good first estimate of the orbit.
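The root-selection step can be sketched with `numpy.roots` (the test polynomial below is illustrative, not the actual 8th-order polynomial of the Gauss method):

```python
import numpy as np

def real_positive_roots(coeffs, tol=1e-9):
    """Real, positive roots of a polynomial given its coefficients
    (highest degree first), sorted ascending."""
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real
    return sorted(r for r in real if r > tol)
```

When this list has more than one entry, the physically meaningful root still has to be picked by hand, as discussed above.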

### Iterative refinement of Gauss method

Also, the output from the Gauss method may be refined; this involves computing iteratively better and better estimates of the Lagrange functions associated to the cartesian states at the first and third observations, in terms of the cartesian state at the second observation. Again, following the text from the book, we were able to implement the iterative refinement procedure for Gauss method.

## Results

In order to test our code, we followed step-by-step the Examples 5.11 and 5.12 from the "Orbital Mechanics…" book by H.D. Curtis. Our results closely follow the numbers cited by the author, although with some small differences, which we attribute to roundoff errors.

Regarding the refinement procedure (Algorithm 5.6), our results converge to practically the same values as the ones cited in the book. In the figure above, we show a comparison of the numbers we get for each of the variables quoted in the book, at each stage of the iterative refinement procedure for the Gauss method. Although some small differences appear, after about 15 to 20 iterations our code converges up to the precision of 64-bit floating-point arithmetic. In the figure below, we show (in black) the elliptic orbit corresponding to the output of the Gauss method for the observational data given in Example 5.11 of the book. The "Observer 1", etc., lines correspond to the geocentric position of the observer on the surface of the Earth; "LOS 1", etc., correspond to the line-of-sight vectors from the observer to the satellite defined by the RA-dec observations. The orange, black and brown lines correspond to the geocentric cartesian states of the satellite at the time of each observation, computed by means of the Gauss method.
As a second test for our implementation of the Gauss method, we took observational data from the Minor Planet Center (MPC) for asteroid 99942 Apophis, a Near-Earth asteroid (NEA). Some differences with respect to Earth-centered orbits are: instead of km for length units and seconds for time units, it is better to use au (astronomical units) and SI days, respectively; and instead of the mass parameter μ=G*m of the Earth, we must use the Sun's mass parameter (in au³ day⁻² units). But one of the main differences of Sun-centered orbits is that, in order to compute the position of an observer on the surface of the Earth with respect to the center of the Sun, we must know in advance the position of the center of the Earth with respect to the center of the Sun. To achieve this, the Python package `jplephem` was used, which allows us to retrieve the cartesian state of the center of the Earth with respect to the center of the Sun using the NASA-JPL DE430 planetary ephemerides.
In the figure below, we show a processing of around 1,000 observations of Apophis, from various observatories from around the world, and using our implementation, we were able to reconstruct the orbit of the asteroid around the Sun:
Since the orbit of this asteroid suffers perturbations from the planets, as well as non-gravitational perturbations (e.g., the Yarkovsky effect), a direct application of the Gauss method gives only a preliminary estimate of its orbit. To compute a more accurate orbit, this method has to be combined with other orbit determination methods (e.g., least squares).

## [GSoC2018|OrbitDeterminator|Arya] Week #5-6 – Making a DGSN simulator

This week I worked on propagating the satellite state. The NORAD TLEs are supposed to be used with the SGP4 propagator. It takes care of major perturbations like the oblateness of the Earth and atmospheric drag. Right now, Aakash is working on making one for our organization. But until then, I decided to work with the SGP4 module by Brandon Rhodes.

### Working with SGP4

To make the code, I thought in reverse. We wanted to propagate a state vector from one time to another. So our function should look like this:

`def propagate_state(r,v,t1,t2)`

I checked and saw that the module only accepts TLEs as input. So we have to convert `r`,`v` to keplerian elements. Then artificially generate a TLE and input it to the module. The Celestrak website specifically asks us not to do this. But when data is not available, one must make approximations. I followed this approach and made the function successfully. However, I wasn’t satisfied. Artificially generating TLEs (which are strings) is computationally expensive. I wanted a direct way to feed the input. So I studied the source code and made another function that does the job without resorting to strings.

To propagate the state, we need to set the following parameters:

```
whichconst, afspc_mode,
inclo, nodeo, ecco, argpo, mo, no,
satnum, ndot, nddot, bstar,
epochyr, epochdays, jdsatepoch, epoch
```

The first line has constants which have default values. `whichconst` is the gravity model which is `wgs72` by default. And `afspc_mode` is `False` unless you are dealing with old TLEs.

The second line has parameters that can be set directly from keplerian elements. The elements, in order, are inclination, longitude of the ascending node, eccentricity, argument of periapsis, mean anomaly, and the number of orbits in a day.

The third line has parameters which are available in TLEs but not available to us. I set `satnum`, `ndot` and `nddot` to 0. `satnum` is irrelevant for the calculation, and `ndot` and `nddot` are normally very close to `0`. I couldn't set the `bstar` term to zero, because that would disable the drag simulation. So I went through the `b*` terms of many satellites and got an average value of `b* = 0.21109e-4`. Unless a `b*` term is supplied, this value is used for the simulation.

The last line has parameters related to the epoch. These parameters, in order, are the epoch year, day of year, Julian date of the epoch, and a Python datetime object of the epoch. They can be set by converting the epoch into a datetime object and manipulating it.
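Two of these fields can be derived from a datetime like this (a sketch of the conversion, not the module's own code; the Julian-date field is omitted here):

```python
from datetime import datetime

def epoch_params(epoch):
    """(epochyr, epochdays): two-digit year and fractional day-of-year,
    in the convention SGP4-style epochs use (Jan 1 is day 1)."""
    start = datetime(epoch.year, 1, 1)
    delta = epoch - start
    days = delta.days + 1 + (delta.seconds + delta.microseconds / 1e6) / 86400.0
    return epoch.year % 100, days
```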

After putting together all this, you get the propagator:

### The DGSN Simulator

Until the real DGSN comes online, we must work with a simulator. A live simulator will also allow us to make live predictions of the satellite. I thought that just running the propagator in an infinite loop would do the job, but it turned out to be more difficult: in a real-time infinite loop, I can't pause to take input, so I can't control the simulator while it's running (for example, stopping or resuming it). Therefore I resorted to multi-threading: the main thread controls the program, and the calculations are done on another thread.

First, I made a basic simulator. It just prints the current location of any satellite, given its state at some previous time. There is also a speed setting for development purposes. The demo below prints the position of the ISS every second in cartesian coordinates.

The next step was to add noise and gaps. Gaps are periods when no signal is available from the satellite. Adding noise was very easy: a random value between `-15` and `15` km is added to every component of the position vector. To add randomness in time, the interval between observations is decided randomly. Implementing gaps was tougher.

I thought about it and deduced that gaps occur when the antenna can't see the satellite, which is caused by the rotation of the Earth and the revolution of the satellite. So the gaps will be periodic in time. To implement this, I wrote the following algorithm:

```
omega = <frequency>

p = abs(sin(omega * t))
if p >= <threshold>:
    # process the output as if the
    # satellite is within range
else:
    # continue as if the satellite
    # is out of range
```

The figure below shows the working of this algorithm:

By changing `omega`, one can control the frequency of the gaps. By changing `threshold`, one can control the duration of the gaps. A lower threshold means smaller gaps and vice-versa.

Putting it all together, I ran the simulator for 2 full orbits and plotted the results on the webapp I made last week. The results look pretty satisfying. 🙂

### Next Steps

In the next couple of weeks, I will be tackling one of the hardest things in my proposal – the Kalman Filter. The Kalman Filter will take predictions from SGP4 and observations from real life/DGSN simulator and combine them to give more precise outputs. To do this first we have to synchronize the two streams of data (model and observations). Then we have to make the actual filter and pass the inputs through it. After that, we have to update the state in such a way that the filter can be used again. That’s a lot of work, so I am looking forward to a very exciting week ahead! 🙂

## [GSoC2018|DirectDemod|Vinay] Week #5-6: NOAA-APT image extraction & AFSK1200 signal decoding

In my last blog we were at the point where we were able to detect the sync locations in the NOAA signal. In this blog, we will look at how the image is extracted and corrected. We will also take a brief look at the AFSK1200 decoder implemented by Andreas Hornig.

## [GSoC2018|VisMa|Shantanu] Week #03-05: Integrating the Integrator

This is GSoC log#02 (view log#01 here). Here I will cover what I have done in weeks #03-05 and give a gist of what was accomplished in the Phase I coding period.

##### Done so far …
• Created parser.py, which handles conversion of tokens to LaTeX or strings and vice-versa.
• Fixed some simplification issues.
• Integrated differentiation.py and integration.py into the project, along with step-by-step explanations.
• Added support for partial differentiation and "partial integration".
• Refactored the code again to decrease complexity and remove unused modules.
• Added the factorize.py module, which can currently factorize polynomials.