During GSoC 2018, I worked on developing an ra/dec, angles-only Keplerian orbit determination implementation for AerospaceResearch.net’s orbitdeterminator project. Our implementation incorporates elements both from the Gauss method for orbit determination and from the least-squares principle, and covers both Earth-orbiting bodies (satellites, space debris, etc.) and Sun-orbiting bodies (asteroids, comets, centaurs, etc.).

My implementation makes use of the `astropy` library, whose `SkyCoord` class provided the right abstraction for the required computations. `astropy` was also very useful for handling units, angles, longitudes, etc., as well as for providing access to the SPK kernel of the JPL DE432s ephemerides, which are at the core of the orbit determination for Sun-orbiting bodies. The `poliastro` library was used to evaluate Stumpff’s functions in the refinement stages of the Gauss method. Finally, `scipy` provided the Levenberg-Marquardt method for the least-squares fit to the ra/dec observations.

The main part of the work I did is contained in the following modules:

`orbitdeterminator/kep_determination/gauss_method.py`

`orbitdeterminator/kep_determination/least_squares.py`

The pull request associated with my GSoC project is on GitHub: aerospaceresearch/orbitdeterminator PR #141. My GitHub handle is @PerezHz. You can also find me on Twitter as @Perez_Hz.

The `orbitdeterminator/kep_determination/gauss_method.py` module lets the user input files containing ra/dec observations of either Earth-orbiting or Sun-orbiting celestial bodies, following specific formats for each case, and computes an orbit from `n >= 3` observations.

If the number of observations is `n == 3`, the Gauss method is applied to those 3 observations. Otherwise, if `n > 3`, the Gauss method is executed over the `n-2` consecutive observation triplets. A set of orbital elements is computed for each of these triplets, and the average over the resulting `n-2` sets of elements is returned. A refinement stage has also been implemented, such that at each iteration a better approximation of Lagrange’s f and g functions is computed, which in turn gives better estimates of the Cartesian state of the observed body.
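The triplet-averaging logic just described can be sketched as follows (hypothetical helper names; the actual implementation lives in `gauss_method.py`, and note that naively averaging angular elements can misbehave near the 0/360 wrap-around):

```python
def average_elements(element_sets):
    """Average a list of 6-element orbital element sets,
    one set per consecutive observation triplet."""
    n = len(element_sets)
    return [sum(es[k] for es in element_sets) / n for k in range(6)]

def gauss_over_triplets(observations, gauss_triplet):
    """Run a Gauss-method solver over all n-2 consecutive triplets
    of observations and average the resulting element sets."""
    triplets = [observations[i:i + 3] for i in range(len(observations) - 2)]
    return average_elements([gauss_triplet(t) for t in triplets])
```

Here `gauss_triplet` stands in for the actual Gauss-method solver applied to one triplet of observations.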

The right ascension and declination observations are topocentric, i.e., they are referred to observation sites registered with either the MPC or COSPAR. The geocentric position of each site is contained in the corresponding registered list. Both the IOD and MPC formats include the alphanumeric code of the observation site, so for each observation it is possible to compute the geocentric position of the observer. This whole process is handled automatically at runtime, so the user does not need to worry about these details.

`gauss_method_sat`

The `gauss_method_sat` function computes Keplerian elements from ra/dec observations of an Earth-orbiting body. The call signature is the following:

`gauss_method_sat(filename, bodyname, obs_arr, r2_root_ind_vec=None, refiters=0, plot=True)`

Here `filename` is the name of the file containing the observations in IOD format, `bodyname` is a user-defined name for the observed object, `obs_arr` is a list/array of integers specifying which lines of `filename` are to be processed, `r2_root_ind_vec` lets the user select a root whenever the Gauss polynomial has multiple solutions (the default is to always select the first root), `refiters` is the number of iterations for the refinement stage of the Gauss method (default `0`), and `plot` is a flag which, if set to `True`, plots the orbit of the object around the Earth. The orbital elements are returned, respectively, in kilometers, UTC seconds and degrees, and they are referred to the (equatorial) ICRF frame. More details may be found in the examples and the documentation.

Example output:

`gauss_method_mpc`

The `gauss_method_mpc` function computes Keplerian elements from ra/dec observations of a Sun-orbiting body. The call signature is practically the same as that of `gauss_method_sat` (see section 3.1). One difference with respect to the Earth-orbiting case is that the orbital elements are returned, respectively, in astronomical units, TDB days and degrees. But the main difference in the internal processing is that, since the orbit is referred to the Sun, the position of the Earth with respect to the Sun has to be determined at each of the observation times. `gauss_method_mpc` uses the JPL DE432s ephemerides, as provided by the `astropy` library, to compute the heliocentric position of the Earth. This also involves a conversion from the UTC to the TDB time scale, again handled using `astropy`’s time scale conversion functionality. Finally, a third difference is that Sun-orbiting orbital elements are returned in the mean J2000.0 ecliptic heliocentric frame. More details may be found in the examples and the documentation.

Example output:

The least-squares functionality developed during my GSoC project may be thought of as differential corrections to the orbits computed by the Gauss method. That is, once a preliminary orbit has been determined by the Gauss method, the best fit of a Keplerian orbit to the observations, in terms of the observed-minus-computed residuals (the so-called O-C residuals), may be found by a least-squares procedure. It is worth mentioning that in the current implementation all observations have the same weight in the construction of the O-C residuals vector.
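Under the equal-weights assumption, the O-C residual vector can be sketched like this (illustrative helper names, angles in radians; not the project's actual routine):

```python
import math

def oc_residuals(observed, computed):
    """Build the equal-weight O-C residual vector from lists of
    (ra, dec) pairs in radians."""
    res = []
    for (ra_o, dec_o), (ra_c, dec_c) in zip(observed, computed):
        # Wrap the RA difference into [-pi, pi] so observations near
        # the 0/2*pi boundary do not produce spurious huge residuals.
        res.append(math.remainder(ra_o - ra_c, 2.0 * math.pi))
        res.append(dec_o - dec_c)
    return res

def cost(observed, computed):
    """Sum of squared residuals: the quantity the least-squares fit minimizes."""
    return sum(r * r for r in oc_residuals(observed, computed))
```

A least-squares routine (e.g. Levenberg-Marquardt) would then minimize `cost` over the six orbital elements that generate the computed positions.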

`gauss_LS_sat`

Similar to `gauss_method_sat` in call signature, but after applying the Gauss method to the subset of observations defined by `obs_arr`, this function applies a least-squares procedure to the whole set of observations (unless otherwise selected by the user via the optional argument `obs_arr_ls`; see documentation), returning the Keplerian elements of the best fit. Orbital elements are returned in kilometers, UTC seconds and degrees, and they are referred to the (equatorial) ICRF frame. More details may be found in the documentation and the examples.

Example output:

`gauss_LS_mpc`

Similar to `gauss_method_mpc` in call signature, but after applying the Gauss method to the subset of observations defined by `obs_arr`, this function applies a least-squares procedure to the whole set of observations (unless otherwise selected by the user via the optional argument `obs_arr_ls`; see documentation), returning the Keplerian elements of the best fit. Orbital elements are returned in astronomical units, TDB days and degrees, and they are referred to the mean J2000.0 ecliptic heliocentric frame. More details may be found in the examples and the documentation.

Example output:

We wish to thank the satobs.org community, who gave us valuable comments and suggestions, and also provided us with observations of the ISS and other civilian satellites. We also wish to thank the Mathematical Physics group at the University of Pisa, for all the helpful discussions and feedback which contributed to this work.

- Allow the user to select ephemerides from `astropy`: currently, the user cannot specify the ephemerides to be used in the orbit determination of Sun-orbiting bodies.
- Add perturbations: for Sun-orbiting bodies, add planetary and non-gravitational perturbations; for Earth-orbiting bodies, add Earth oblateness and atmospheric drag to the orbit model.
- For IOD-formatted files, allow the use of other "angle subformats". There are actually 7 ways to encode the angles in the IOD format, but currently the code only supports angle subformat 2, which was chosen because it is the most used in the Satobs community. Some of these subformats involve alt/az measurements, so a future version of the code should know how to handle alt/az measurements in addition to ra/dec measurements. This could be done, e.g., by using the `(alt, az) -> (ra, dec)` conversion functions from the `astropy` library.
- Implement orbit determination algorithms for the SGP4/SDP4 models from ra/dec measurements.
- Let the user choose between two possible values for the signed difference between two angles, instead of always choosing the shortest signed difference by default.
- Add support for new or non-listed observatories in the MPC or the COSPAR lists of registered observatories.
- Take into account the finite light propagation time, as well as atmospheric refraction.
- Add support for radar delay/Doppler observations of Sun-orbiting bodies.
- Add support for non-equal weights in least-squares subroutines.
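As an aside on the signed-difference item in the list above, the two candidate differences between a pair of angles can be sketched as follows (illustrative helpers, angles in degrees; the current code always uses the first one):

```python
def signed_diff(a, b):
    """Shortest signed difference a - b between two angles in degrees,
    in the interval (-180, 180] -- the current default behavior."""
    d = (a - b) % 360.0
    return d - 360.0 if d > 180.0 else d

def other_diff(a, b):
    """The complementary, "long way around" signed difference that a
    future version could let the user choose instead."""
    s = signed_diff(a, b)
    return s - 360.0 if s > 0 else s + 360.0
```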


**visma – VISualMAth**, an equation solver and visualizer, which aims at showing the step-by-step solution and interactive plots of a given equation.

Deliverables

The major changes proposed for the project **visma** during GSoC’18 were:

- changing the codebase to follow the object-oriented style
- calculus operations like integration, differentiation (including partial diff.)
- solving equations/inequalities and multivariable linear equations
- improved GUI and interactive plots

The following file structure shows which modules were added/modified to the project during the GSoC period:

```
visma
|__visma (contains all modules)
|  |__calculus (added basic diff. and integr.)
|  |__functions (converted from dictionary to class based tokens for functions)
|  |__gui (added elements like qsolver, plotter, settings and modified steps displayer)
|  |__io (reorganized tokenize and added checks and parser for equations)
|  |__matrix (added matrix operations, to be integrated with GUI)
|  |__simplify (reorganized simplify into addsub and muldiv)
|  |__solvers (added solver for multivariable equations)
|  |__transform (added modules like factorization and substitution)
|  |__tests (unit tests for all the above modules using pytest)
|     |__test_calculus.py (contains test cases for calculus)
|     :
|__main.py (modified to accommodate all new GUI elements)
|__run.sh (modified to support installing, testing and packaging visma)
|__setup.py (for packaging visma)
```

Below is a quick demo of some of the features (calculus, solver, plotter, etc.) implemented in **visma**. To experience **visma** and explore all its features, go to the install guide or the code wiki.

There were many new things I came across while working on this project. I learned about integrating UI elements (using **PyQt**), unit testing (using **pytest**), and packaging Python projects.

- **Week #01-02: Making ViSMA ‘classy’** – Changed code to follow object-oriented style
- **Week #03-05: Integrating the Integrator** – Added the calculus modules
- **Week #06-07: Finding the Unknown** – Implemented solvers
- **Week #08-09: Develop, Test, Repeat** – Added unit testing for all modules
- **Week #10-11: Dynamic Simplification and Plotting** – Enhanced the GUI
- **Week #12-13: Packing Up** – Finishing touches, packaging code, documentation

Though the GSoC’18 period has ended, there are a lot of new things which can be added to **visma**. Some of the changes and features I (and others) intend to add to **visma** after GSoC are:

- Make visma accessible from the terminal
- Add support for more functions
- Fix the issues created
- Make a webapp

- **Project Source** – https://github.com/aerospaceresearch/visma
- **My Commits** – https://github.com/aerospaceresearch/visma/commits?author=8hantanu
- **Final PR** – https://github.com/aerospaceresearch/visma/pull/61
- **Documentation** – https://github.com/aerospaceresearch/visma/wiki
- **TODO board** – https://github.com/aerospaceresearch/visma/projects/1
- **Alternate link** – https://8hantanu.github.io/projects/visma

The organization I am working with is **AerospaceResearch.net**. It has many interesting projects, not only for undergraduate beginners but for master’s students too. We are a small group of enthusiasts who love to solve problems related to space. The benefit of being part of a small group is that you get to know each other in a very short time. We have regular meetings with our project mentors, and the best part is that we can contact our mentors anytime. My mentor is Nilesh Chaturvedi, who has guided me throughout the project and is still doing so. Alexandros Kazantzidis, another mentor in the organization, cleared my doubts while building the propagation model.

*Link to the organization repo.*

The title of the project is **Implementing Two-Line Element (TLE) Input/Output and using it for evaluation**.

Two-Line Element (TLE) is a data format or a way of encoding various parameters of an object orbiting Earth at any particular time point, an epoch. TLEs can describe the trajectories only of Earth-orbiting objects. It is widely used as an input for projecting the future orbital tracks of space objects. A TLE set may include a title line preceding the element data, so each listing may take up three lines in the file. The title is not required, as each data line includes a unique object identifier code. The two lines contain a lot of information about the object and have a length of 69 characters each.

The scripts added to the module are: database initialization, scraper, parser, Gibbs method, and propagation model. More details about each are as follows:

The database plays a major role in testing and generating the results of a module. In the beginning we did not have TLE data, so we decided to store TLEs from the celestrak website in a database. The next question was which kind of database to use: SQL or NoSQL? Both have their own advantages and disadvantages. Considering our needs and the data to be stored, we chose an SQL database, because the input is well defined and structured: we only want to store 2 strings (line 1 and line 2 of the TLE) at a particular epoch. MySQL is used to store the data, in the format: timestamp, line 1 of TLE, line 2 of TLE.

For database creation, initialization, and maintenance two python scripts were written:

- initDatabase.py

The script first creates a database in MySQL so that we can create tables to store the TLEs of the satellites. After that, it crawls the celestrak website and extracts only the satellite names, so that we can create tables in our database. But there was a problem in the creation of tables: the satellite names contain special characters (like dashes, spaces and parentheses), and passing a special character into a MySQL command produced an error. To solve this, each satellite name is converted into its hexadecimal md5 hash value, which is alphanumeric, so the table names are now the satellites’ md5 hash values. The format of the tables inside the database is:

```
+-----------+---------------+---------------+
| timestamp | line_1_of_tle | line_2_of_tle |
+-----------+---------------+---------------+
| t1        | line1         | line2         |
| t2        | line1         | line2         |
| ..        | ...           | ...           |
| tn        | line1         | line2         |
+-----------+---------------+---------------+
```
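The name-to-table mapping described above can be sketched with Python's `hashlib` (illustrative helper name; the real script also issues the corresponding MySQL `CREATE TABLE` statements):

```python
import hashlib

def table_name(satellite_name):
    """Derive a MySQL-safe table name from a satellite name: the
    hexadecimal md5 hash of the name, which contains only [0-9a-f]."""
    return hashlib.md5(satellite_name.encode("utf-8")).hexdigest()
```

For example, `table_name("ISS (ZARYA)")` yields a 32-character hex string with no spaces, dashes or parentheses, so it is always safe to use as a table name.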

The flowchart of the script is as follows,

Following is the image of the database containing the tables,

- scraper.py

This script is responsible for adding new TLE data to the database. It first creates a connection with the database and then crawls the celestrak website to extract TLE data for every satellite listed there. After the data is extracted, it converts the name of the satellite to its md5 hash and adds the TLE data to the respective table through a MySQL query.

The flowchart of the script is as follows,

The table containing TLE data,

*\* Format of the table is given above*

Currently, the database module has been moved from the organization repository to my own repository on GitHub (which contains all the code created during GSoC), because it was producing an error during the CI build: the script tries to connect to a locally stored MySQL database, which is not available in that environment. The database module will be added back later using an online database.

The database module can be found here.

See the source code here.

The Gibbs method is based on the fact that two-body orbits lie in a plane. It takes three position vectors as input and generates a pair consisting of a position vector and a velocity vector, together known as the state vector. The Gibbs method is implemented using OOP concepts and hence supports data abstraction and encapsulation. The method involves a lot of mathematical computation.

The input to the script is in the following format,

```
timestamp, x-coordinate, y-coordinate, z-coordinate
t1, x1, y1, z1
t2, x2, y2, z2
t3, x3, y3, z3
..
tn, xn, yn, zn
```

A pseudo-code and a brief explanation of the script is as follows,

Pseudo-code (note: a symbol in front of a cross product or in a difference of scalars denotes the magnitude of the corresponding vector; hats denote unit vectors):

```
Input: r1, r2, r3 - three position vectors

1. Take r1, r2 and r3.
2. Calculate C12 = r1 x r2, C23 = r2 x r3 and C31 = r3 x r1.
3. Verify that u_hat . C23_hat = 0 (coplanarity check).
4. Calculate N, D and S from:
       N = r1 (r2 x r3) + r2 (r3 x r1) + r3 (r1 x r2)
       D = r1 x r2 + r2 x r3 + r3 x r1
       S = r1 (r2 - r3) + r2 (r3 - r1) + r3 (r1 - r2)
5. Calculate v2 from:
       v2 = sqrt(mu / (N D)) * ((D x r2)/r2 + S)

Take r2 as the position vector and v2 as the velocity vector; together
they form the state vector.
```

A file containing position vectors at various time epochs is passed to the script. The script reads the file and takes sets of three consecutive position vectors in order to apply the Gibbs method. The format of the file is: timestamp, x coordinate, y coordinate, z coordinate. For each set, the script passes the data to the *gibbs()* function (pseudo-code above), which computes and returns the state vector. On the next iteration, the first position vector is dropped from the set and the next position vector from the file is appended as the third one, and the set is passed to the function again. Since each set of 3 consecutive vectors yields one state vector, the number of outputs is two less than the number of position vectors in the input file. All the resulting state vectors are collected into a vector, which is returned as the result.
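The pseudo-code can be turned into a minimal, dependency-free Python sketch (illustrative names; the project's actual implementation is in `gibbsMethod.py` and includes the coplanarity check omitted here):

```python
import math

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2])

def gibbs(r1, r2, r3, mu=MU):
    """Velocity (km/s) at the middle of three coplanar position
    vectors (km), following steps 2-5 of the pseudo-code."""
    n1, n2, n3 = norm(r1), norm(r2), norm(r3)
    c12, c23, c31 = cross(r1, r2), cross(r2, r3), cross(r3, r1)
    N = tuple(n1*c23[k] + n2*c31[k] + n3*c12[k] for k in range(3))
    D = tuple(c12[k] + c23[k] + c31[k] for k in range(3))
    S = tuple(r1[k]*(n2 - n3) + r2[k]*(n3 - n1) + r3[k]*(n1 - n2)
              for k in range(3))
    coeff = math.sqrt(mu / (norm(N) * norm(D)))
    dxr2 = cross(D, r2)
    return tuple(coeff * (dxr2[k] / n2 + S[k]) for k in range(3))
```

A quick sanity check: for three points on a circular orbit of radius r, the returned velocity is tangential with magnitude sqrt(mu/r).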

The script contains a function which converts the state vectors into Orbital or Keplerian elements namely semi-major axis, inclination, right ascension of the ascending node, eccentricity, argument of perigee, and mean anomaly which can thus help further in tracking and plotting satellite’s orbit and in a lot of other computation.

Pseudo code for converting state vector into orbital elements is as follows:

Pseudo-code (note: plain symbols denote the magnitudes of the corresponding vectors):

```
Input: r - position vector, v - velocity vector

 1. Distance:            r = sqrt(r . r)
 2. Speed:               v = sqrt(v . v)
 3. Radial velocity:     vr = (r . v) / r
 4. Angular momentum:    h = r x v,  h = sqrt(h . h)
 5. Inclination:         i = acos(hz / h)               (hz: z-component of h)
 6. Node line:           K = [0, 0, 1],  N = K x h,  N = sqrt(N . N)
 7. RAAN:                Omega = acos(Nx / N)           if Ny >= 0
                         Omega = 360 - acos(Nx / N)     if Ny <  0
 8. Eccentricity vector: e = (1/mu) [ (v^2 - mu/r) r - (r . v) v ]
 9. Eccentricity:        e = sqrt(e . e)
10. Argument of perigee: w = acos((N . e)/(N e))        if ez >= 0
                         w = 360 - acos((N . e)/(N e))  if ez <  0
11. True anomaly:        theta = acos((e . r)/(e r))        if vr >= 0
                         theta = 360 - acos((e . r)/(e r))  if vr <  0
12. Semi-major axis:     rp = h^2 / (mu (1 + e))            (mu = 398600.4418)
                         ra = h^2 / (mu (1 - e))
                         a  = (rp + ra) / 2
```
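A few of these steps (angular momentum, inclination, eccentricity) can be sketched in plain Python (illustrative helper names; not the project's code):

```python
import math

MU = 398600.4418  # km^3/s^2

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def _norm(a):
    return math.sqrt(_dot(a, a))

def inclination_and_ecc(r, v, mu=MU):
    """Steps 1-5 and 8-9 of the pseudo-code: inclination (degrees)
    and eccentricity from a single state vector (km, km/s)."""
    h = _cross(r, v)
    hm = _norm(h)
    # Clamp to guard against tiny floating-point overshoot of acos's domain.
    inc = math.degrees(math.acos(max(-1.0, min(1.0, h[2] / hm))))
    rm = _norm(r)
    rv = _dot(r, v)
    e_vec = tuple(((_dot(v, v) - mu / rm) * r[k] - rv * v[k]) / mu
                  for k in range(3))
    return inc, _norm(e_vec)
```

For a circular equatorial orbit this returns inclination 0 and eccentricity 0, which makes for an easy sanity check.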

See the source code here.

Simplified perturbations models are a set of five mathematical models (SGP, SGP4, SDP4, SGP8 and SDP8) used to calculate orbital state vectors of satellites and space debris relative to the Earth-centered inertial coordinate system. This set of models is often referred to collectively as SGP4 due to the frequency of use of that model particularly with two-line element sets produced by NORAD and NASA.

The propagation model used for the project is SGP4. The model takes a TLE without title line as input arguments. The output generated by the propagation model is a state vector.

The code takes a TLE as input. It extracts the necessary elements from the TLE in *compute_necessary_tle()*, which are needed for the SGP4 computation: inclination, right ascension of the ascending node, eccentricity, argument of perigee, mean anomaly, mean motion, and the bstar drag term. After that, the propagation model is executed, which involves a lot of heavy mathematical computation. Once the computation is over, the function returns a state vector as output.

Now, to check whether the code produces correct results, we need to convert the resulting state vector back into a TLE. So one more function was added, which recovers the TLE from state vectors. To validate the result, Spacetrack Report #3 is used, which contains a sample TLE for finding the state vector at a particular epoch. The propagation model takes two input arguments, t1 (start epoch) and t2 (stop epoch), and the state vectors are computed over the range [t1, t2] (including both endpoints). Since there is a state vector at every epoch, an output vector is created which stores all the state vectors and is returned as the result.

Now, the problem is that the input comes in the form of Keplerian elements while the function accepts a TLE. One way to resolve this is to recover a TLE from the Keplerian elements, pass the TLE as input, and extract the same information again (in *compute_necessary_tle()*) in order to propagate the model. But since this approach is costly, the alternative is a second function which takes Keplerian elements as input parameters directly, so the same information does not need to be transformed back and forth. This is why *compute_necessary_kep()* was added. The code now accepts input in both ways, Keplerian elements as well as a TLE: execution can start either from *compute_necessary_tle()* or from *compute_necessary_kep()*, depending on the input. After producing the state vectors from t1 to t2, the script stores them in an array and returns it as output.

Follow the steps below to run the script:

```
>>> obj = SGP4()
>>> obj.compute_necessary_kep(<input>)   # or: obj.compute_necessary_tle(<input>)
>>> obj.propagate(t1, t2)
```

Calling *compute_necessary_kep()* or *compute_necessary_tle()* is mandatory: this function extracts the important information from the given input and initializes the class variables which are used later in the propagation model, and without which the computation is not possible. If neither *compute_necessary_kep()* nor *compute_necessary_tle()* is called before *propagate()*, the interpreter throws a custom exception which says:

```
Error: Call compute_necessary_kep() or compute_necessary_tle() function
of the class SGP4 before calling propagate().

Function Declaration:
compute_necessary_kep(list, float)
    Parameter 1: List of keplerian elements (semi-major axis, inclination,
                 ascension, eccentricity, perigee, anomaly)
    Parameter 2: bstar drag term
    Returns: NIL

compute_necessary_tle(str, str)
    Parameter 1: First line of the TLE
    Parameter 2: Second line of the TLE
    Returns: NIL
```

So, just remember to call the functions in the correct sequence and you are good to go.
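The call-order contract just described can be sketched as a simple guard in the class (hypothetical `SGP4Sketch`, not the project's actual `sgp4.py`; the real propagate() of course returns actual state vectors):

```python
class SGP4Error(Exception):
    """Raised when propagate() is called before initialization."""

class SGP4Sketch:
    """Sketch of the call-order contract: propagate() refuses to run
    until one of the compute_necessary_* initializers has been called."""

    def __init__(self):
        self._initialized = False

    def compute_necessary_kep(self, kep, bstar):
        self._elements = (list(kep), bstar)   # store elements + drag term
        self._initialized = True

    def compute_necessary_tle(self, line1, line2):
        self._elements = (line1, line2)       # TLE parsing omitted in sketch
        self._initialized = True

    def propagate(self, t1, t2):
        if not self._initialized:
            raise SGP4Error("Call compute_necessary_kep() or "
                            "compute_necessary_tle() before propagate().")
        return []  # the real model returns state vectors over [t1, t2]
```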

A test file for propagation model is also written which verifies the output of the two functions *propagation_model()* and *recover_tle()*.

- Spacetrack Report No. 3 (Models for Propagation of NORAD Element Sets)
- MATLAB File Exchange SGP4

- Commits: https://github.com/aerospaceresearch/orbitdeterminator/commits?author=aakash525
- Database: https://github.com/aakash525/Orbital-Determinator/tree/master/code/database
- Gibbs Method: https://github.com/aerospaceresearch/orbitdeterminator/blob/master/orbitdeterminator/kep_determination/gibbsMethod.py
- Propagation Model: https://github.com/aerospaceresearch/orbitdeterminator/blob/master/orbitdeterminator/propagation/sgp4.py

Deciding which packaging style to choose was a difficult task. Though there were a number of options available for packaging Python apps, I couldn’t find one that makes packaging for Windows simple. I managed to convert the source into a single executable using **pyinstaller**, but the performance was compromised.

Finally, I went with **PyPI**. A simple *setup.py* was created containing information about the dependencies to be installed. Installing **visma** is quite simple now. To install just type:

`$ pip3 install VISualMAth`

And to launch type:

`$ visma`

I also modified the *run* script which can be used by future developers to install, test and package **visma**. Below are the available options in the new *run* script.

```
$ ./run
Enter command arguments with run
./run install - Install all dependencies for visma
./run visma - Open visma GUI
./run test - Run all the tests and generates coverage report
./run test path/to/test_file.py - Runs all tests and shows coverage for given file
./run test syntax - Run syntax test using pylama
./run test modules - Run tests using pytest for all modules
./run test coverage - After running all the tests, open coverage report
./run pack - Generate builds for visma package
./run pack upload - Generate builds and then upload to test.pypi.org
./run pack final - Generate builds and upload final build to pypi.org
./run clean - Clean all cache, reports and builds
```

The *plotter* has been divided into two separate tabs, 2D plot and 3D plot, and the appropriate tab is focused when plotting a given input. Also, I have embedded the settings menu into the UI instead of creating a pop-up one. The settings contain options to enable/disable UI elements, change font sizes, and change the plot’s axis ranges, mesh density, etc.

Plotting an equation in different axis ranges helps in visualizing it in a better way. The below demo justifies the previous statement.

The project wiki is yet to be updated. It will contain the user and developer manual. Some more inline comments and docstrings can be added.

Though the GSoC period is coming to an end, there are still a lot of new things I want to implement in **visma**. I am thinking of making **visma** accessible from the terminal itself. A webapp is also on my mind (will implement after adding some more useful features).

Link to project source and to-do board.

Read on my blog.

]]>

- The directDemod module was written from scratch with several modules that will help in easy handling of raw IQ files.
- Implementation of NOAA demodulation and decoding completed with several features such as color image production, map overlay.
- Implementation of Meteor M2 (QPSK) and Funcube (BPSK) demodulators.
- Accurate sync detection in NOAA, Meteor M2 and Funcube.
- Individual module documentation, tutorials and getting started guides.

- Main repository: https://github.com/aerospaceresearch/DirectDemod
- My commits: https://github.com/aerospaceresearch/DirectDemod/commits?author=7andahalf
- Documentation site: http://directdemod.readthedocs.io/en/vinay_dev
- Experiment logs: https://mybinder.org/v2/gh/aerospaceresearch/DirectDemod/Vinay_dev
- Blogs: All blogs by me https://aerospaceresearch.net/?author=12

If you want to use it to decode your raw files, mainly as a product user, please refer to the getting started guide.

If you want to understand the code and contribute you can do the following:

- To understand generally what the code is doing please read my blogs @ (https://aerospaceresearch.net/?author=12)
- To understand how to use some of the modules please refer to the tutorial files
- To get a deeper understanding of the inner workings of the filters, chunking, etc. modules, read through the experiment logs in the repo.
- Please refer to the documentation to get a complete list of features available in the modules.

- The Funcube sync finding can be better tested and tuned, so that it performs better.
- Add data decoding functionality for Meteor M2 and Funcubes to get the images out of them etc.
- Fine tune the color image production of NOAA images
- Further ideas can be found at https://github.com/aerospaceresearch/DirectDemod/issues

If you have further questions you can always comment on this blog, open an issue on GitHub, or email me directly.

OrbitDeterminator is part of the bigger DGSN project by AerospaceResearch.Net. The Distributed Ground Station Network will be a network of thousands of ground stations across the globe. Together, these ground stations will track satellites cheaper and faster than conventional tracking stations. It will also get ordinary citizens involved in real space missions, so it is a win-win for everybody.

The purpose of OrbitDeterminator is to process the data coming from the DGSN and estimate the orbit of the satellite. It should also be able to predict the satellite’s location at any future time. This is the approximate workflow of the project:

(Note that the project is still under development so this workflow might change in future.)

My mentor was Alexandros Kazantzidis. He, along with Nilesh Chaturvedi, had worked on the preliminary orbit determination part last year. However, the results could be improved, so I worked a little on that, but my main contribution was real-time tracking. As a side project I made a web application for one of the modules. I also cleaned the repo by adding a gitignore file and removing all temporary and IDE files.

The project is hosted on GitHub and made entirely in Python. It includes a requirements.txt and setup.py file. So it can be installed easily using pip. The project directory structure is quite simple and intuitive:

```
├── docs
└── orbitdeterminator
    ├── example_data
    ├── filters
    ├── kep_determination
    ├── propagation
    ├── test_deploy
    ├── tests
    └── util
```

Observation data is stored in the form of csv files in the following format:

```
# the data consists of 4 columns separated by spaces:
#
# time, x-coordinate, y-coordinate, z-coordinate
#
# for example:
1526927274 -5234.47464 4126.08971 -1266.52201
1526927284 -5272.8012  4094.65453 -1208.06841
```
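A minimal reader for this format might look like the following (hypothetical helper, not part of the project):

```python
def parse_observations(lines):
    """Parse the space-separated observation format above into
    (timestamp, x, y, z) tuples, skipping blank and '#' comment lines."""
    rows = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        t, x, y, z = line.split()
        rows.append((int(t), float(x), float(y), float(z)))
    return rows
```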

Time is stored in unix timestamp format. Unit of distance (unless otherwise specified) is a kilometer. Keplerian orbital elements are stored in the form of a 1×6 numpy array:

kep[0] = semi-major axis kep[1] = eccentricity kep[2] = inclination kep[3] = argument of perigee kep[4] = right ascension of ascending node kep[5] = true anomaly

Similarly, state vectors are stored in the form of a 1×6 numpy array:

s = [rx,ry,rz,vx,vy,vz]

This should be enough to get one started. For more help you can check the documentation.

Last year two methods of filtering were developed – the Savitzky-Golay filter and the Triple Moving Average filter. Two methods of orbit determination were also developed – Lambert’s method and an interpolation method. However, they were not giving good results for certain kinds of data.

I made a new algorithm that fits an ellipse to the data, similar to fitting a line to a set of points. It gave better results than the previous algorithms so we decided to continue with it.

The real-time tracking consists of two parts – propagation of the state vector and the Kalman filter. For propagation we initially chose the SGP4 algorithm. It propagated excellently but was not stable with the Kalman filter, so I wrote a simple propagator of my own. It uses numerical integration and takes into account atmospheric drag and the oblateness of the Earth. After writing it I spent a lot of time testing it. The only major problem was that the orbital period did not match the real period. A simple workaround was to manually adjust the semi-major axis of the satellite until it gave the correct period. I tried hard to fix this properly but could not. Since the propagator worked well with the Kalman filter anyway, we decided to continue with it.
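As a sketch of what such a propagator involves – numerical integration of two-body gravity plus a J2 oblateness term – here is an illustrative reimplementation with scipy; this is not the project's propagation/cowell.py, and drag is omitted for brevity:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418     # Earth GM, km^3/s^2
J2 = 1.08262668e-3   # Earth oblateness coefficient
RE = 6378.137        # Earth equatorial radius, km

def deriv(t, s):
    """Time derivative of the state [rx, ry, rz, vx, vy, vz]:
    two-body gravity plus the J2 perturbation (drag omitted here)."""
    r, v = s[:3], s[3:]
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3
    # standard J2 acceleration terms
    k = 1.5 * J2 * MU * RE**2 / rn**4
    z2 = (r[2] / rn)**2
    a += (k / rn) * r * np.array([5 * z2 - 1, 5 * z2 - 1, 5 * z2 - 3])
    return np.concatenate([v, a])

# Propagate a near-circular inclined orbit for roughly one period
vc = np.sqrt(MU / 7000.0)
r0 = np.array([7000.0, 0.0, 0.0])
v0 = np.array([0.0, vc * np.cos(np.radians(51.6)), vc * np.sin(np.radians(51.6))])
sol = solve_ivp(deriv, (0.0, 5800.0), np.concatenate([r0, v0]),
                rtol=1e-9, atol=1e-9)
r_end = sol.y[:3, -1]
```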

Since enough ground stations do not exist yet, we needed some sort of artificial data to test the Kalman filter with. So I proposed a DGSN simulator module. The basic idea is that it periodically reports the location of the satellite with some noise. In the end the simulator had the following features:

- Noise in spatial coordinates
- Non-uniform interval between observations
- Gaps in observations (to simulate the satellite being out of range)
- Simulation speed control
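A minimal version of such a simulator might look like the following; the real module is propagation/dgsn_simulator.py, and the parameter names and values here are purely illustrative:

```python
import numpy as np

def simulate_observations(true_positions, times, noise_sigma=1.0,
                          jitter=0.2, gap_prob=0.1, seed=0):
    """Return (time, noisy position) pairs, randomly dropping some
    to simulate the satellite being out of range."""
    rng = np.random.default_rng(seed)
    out = []
    for t, p in zip(times, true_positions):
        if rng.random() < gap_prob:                # gap in observations
            continue
        t_obs = t + rng.normal(0.0, jitter)        # non-uniform intervals
        p_obs = p + rng.normal(0.0, noise_sigma, 3)  # spatial noise
        out.append((t_obs, p_obs))
    return out

# Fake ground truth: a slow circular arc at 7000 km radius
times = np.arange(0.0, 100.0, 1.0)
truth = np.column_stack([7000.0 * np.cos(times / 1000.0),
                         7000.0 * np.sin(times / 1000.0),
                         np.zeros_like(times)])
obs = simulate_observations(truth, times)
```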

The simulator in action. It is printing the coordinates every second.

Plotting the data produced by the simulator.

After all this, only the Kalman filter was left. This was quite straightforward: I directly implemented the equations from Wikipedia and it worked quite well. However, a time-varying covariance matrix would give better results.
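Those equations are short; a minimal sketch of the textbook linear Kalman filter (a toy scalar example with fixed Q and R, not the project's propagation/kalman_filter.py) looks like this:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict+update cycle of the standard linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy example: estimate a constant scalar from noisy measurements.
rng = np.random.default_rng(1)
F = H = np.eye(1)
Q = np.eye(1) * 1e-6   # process noise (fixed here; could be time-varying)
R = np.eye(1) * 1.0    # measurement noise
x, P = np.zeros(1), np.eye(1)
for z in 5.0 + rng.normal(0.0, 1.0, 200):
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
# x should converge close to the true value 5.0
```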

As a side project I made a web application for the ellipse fit algorithm. We created a new repository for it because it is not a necessary component of orbitdeterminator. It is built with the Dash library: upload a csv file, and it will process it and give the orbital elements along with beautiful graphs.

The web application

- kep_determination/ellipse_fit.py
- propagation/cowell.py
- propagation/simulator.py
- propagation/dgsn_simulator.py
- propagation/kalman_filter.py
- propagation/sgp4_prop.py
- propagation/sgp4_prop_string.py
- tests/test_ellipse_fit.py
- util/anom_conv.py
- util/new_tle_kep_state.py
- util/teme_to_ecef.py
- orbitdeterminator-webapp/app.py

- Integrating Aakash’s SGP4 module with the Kalman filter.
- Fixing the time period problem in the numerical propagator.
- Testing the system with real observations.
- Improving the Kalman filter.
- Deploying the web application on a public server.

Since the last post, the AssignData window has undergone considerable changes. I had to redesign it from scratch due to limitations in JavaFX's TreeView implementation: moving items from one tree to another causes them to behave incorrectly. I settled on another solution that may not look as smooth and intuitive but performs much better and actually works.

This is what it looks like:

The tree to the left shows all states and charts that the user previously selected. When one is clicked, the right text area shows which variables are currently assigned to it. By clicking 'Manage Variables', a new window is displayed with all the current sensors and variables from the currently selected protocol:

My first iteration of this last window brought some problems. TreeViews in JavaFX must represent an object, and since I wanted to show Charts/States (in the first window) and Sensors/Variables (in the latter window, shown above), I chose to display simple Strings. I quickly realized that this implementation was far from the best one: I had to fetch all sensors from the protocol, get their names, create a tree with those names, and then find the correct instance of each one when it was selected. The cons are obvious: more overhead, possible loss of references, etc. To solve this problem, an interface called UIElement was created. It holds no behaviour (it is an empty interface); its only purpose is to group different objects that can be displayed in the same TreeView.

Other major problems came when saving a layout to a file. I was writing all sensor and variable information to the configuration file, but the full information for a sensor or data point does not belong there; that is what the protocol XML file is for. This introduced a lot of unneeded redundancy. Data Transfer Objects (SensorDTO and VarDTO) were created to write only the necessary information to the layout.

As of now, the core/foundation of the data assignment process is finished. At any given moment, a configuration class tells which variables and sensors should be displayed in a given chart/state. However, there is still some work to do so that the charts and states actually show the data assigned to them. It is one of the things I want to work on, with the help of my mentors, after GSoC ends.

Another goal of my project was to facilitate user interaction with the ground station through context menus (the menus that usually appear when you right-click in an application). The feature was introduced in the past few weeks, and I am pretty happy with it. For now it holds basic features (hiding charts, adding/editing/removing elements, etc.), but it can be expanded to fit any needs that may arise in the future. Its implementation is straightforward, and new features are easy to add.

Since I was the only one pushing to the repository during this period, my code can easily be seen by looking through the commits and filtering by this time period. However, here is a git diff that shows all the changed files.

What I did in the last 3 months (all documentation can be found in previous blog posts and the project’s wiki):

- A menu bar that holds many easy-to-access features, like loading protocols, hiding/showing panels, etc;
- Separated the 3 views with a Split Pane, for easier resizing;
- Implementation of a Layout system, where the user can have his own version of the ground station. Layouts can be persisted through a JSON file
- Implementation of the Layout Creator (a tool that eases the process of creating a layout)
- Created a system to assign data to UI elements easily
- Created the Assign Data Window, which shows all current states/charts and lets the user assign data points to them

- Implementation of panel menus
- Ability to add/remove/edit states directly from the main window
- Ability to add/remove/edit charts as well

- Updated the project to Java 8
- Small GUI tweaks
- Code cleanup of the overall project
- JavaDoc for all new class files as well as some missing methods from older classes
- Documentation of my work in the project wiki
- Updated out-of-date pages as well

- Blog posts to document my work throughout the event

Even though Google Summer of Code 2018 has pretty much ended, my work towards this project has not. I gathered a list of tasks/issues that can be worked upon afterwards. These are:

- Update charts and states with the previously assigned data
- Change the way the application handles both configuration and protocol files
- Implement Drag and Drop functionality to add variables
- Fix States/Charts information not being displayed
- Other UI changes

Name: Pedro Portela (Portugal)

Field of study: Software Engineering @ ISEP

Email: pedro.portela@aerospaceresearch.net


Both the Meteor and Funcube signals contain synchronization bits sent at regular intervals. Our goal is to detect these bits as accurately as possible. Since both Meteor and Funcube are digital systems, simply comparing the output bits with the sync bits does not locate the sync very accurately, because the bit output is at the baud rate. Meteor has a 120-bit sync sequence [1]; Funcube has a 32-bit sync sequence [2].

Getting the location of the sync at baud rate is simple: we just slide the needle (the sync sequence) across the bitstream until we encounter an almost perfect match (a few bit errors are common). This crude sync can then be used to find the accurate sync, since the accurate sync lies close to it. To do so, we multiply the PLL correction factor (refer to the block diagram) with the samples around the crude sync at the high sampling rate, use an appropriately expanded needle (sync bit sequence), and correlate to find the peak. This peak gives an accurate enough location of the sync.
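The crude baud-rate search can be sketched in a few lines (illustrative only; a real decoder would follow it with the high-rate correlation step described above):

```python
import numpy as np

def find_sync(bits, needle, max_errors=3):
    """Slide the sync pattern across the bitstream and return the positions
    where the Hamming distance is within max_errors."""
    bits = np.asarray(bits)
    needle = np.asarray(needle)
    n = len(needle)
    hits = []
    for i in range(len(bits) - n + 1):
        errors = np.count_nonzero(bits[i:i + n] != needle)
        if errors <= max_errors:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
needle = rng.integers(0, 2, 32)   # a 32-bit sync, like Funcube's
stream = rng.integers(0, 2, 1000)
stream[500:532] = needle          # embed the sync at position 500
stream[505] ^= 1                  # with one bit error
hits = find_sync(stream, needle)  # should recover position 500
```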

Here is the result of a sync detection on the Meteor-M2 satellite. We can see that this is a very good result; we can even see the Doppler effect in action.

One thing to keep in mind while decoding Meteor is that, since the constellation has four points, it still looks the same when flipped or rotated. So there is not just one sync sequence but several (all bits flipped, every other bit flipped, etc.). Hence all these sequences had to be used to find the sync.
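Sketching that idea (illustrative; the exact variant set depends on the bit mapping of the constellation):

```python
import numpy as np

def sync_variants(needle):
    """Generate the sync patterns a 4-point constellation ambiguity produces:
    the original, all bits flipped, every other bit flipped, and both."""
    needle = np.asarray(needle)
    alt = np.arange(needle.size) % 2       # mask selecting every other bit
    return [needle,
            1 - needle,                    # flip all bits
            np.abs(needle - alt),          # flip every other bit (XOR mask)
            np.abs((1 - needle) - alt)]    # both flips combined

v = sync_variants([0, 1, 1, 0])
```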

Here is the result of a sync detection on the Funcube satellite. This is an okay result, but considering that the Funcube sync is only sent once every 5 seconds, I think it is fair.

Since the Funcube signal has only a few kHz of bandwidth, the Doppler effect is very prominent. The correction can be done manually by selecting a `-f` value close to the mean position of the signal, or the `--freqshift` flag can be used and the script will automatically attempt to correct the shift. The offset can be estimated with a Fourier transform, and the correction is done by multiplying every sample with a complex exponential (e^(-2*pi*i*offset*t)) to cancel it. The offset is changed gradually so that the PLL can keep up with the changes; increasing the PLL bandwidth can also help.
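The estimate-and-correct step can be sketched as follows, on a synthetic tone with an illustrative sample rate (a real decoder would apply the correction gradually, as described):

```python
import numpy as np

fs = 48000.0                       # sample rate (Hz), illustrative
t = np.arange(4096) / fs
offset = 1200.0                    # true Doppler-like offset (Hz)
x = np.exp(2j * np.pi * offset * t)  # complex baseband tone

# Estimate the offset from the FFT peak...
spec = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x), 1.0 / fs)
est = freqs[np.argmax(np.abs(spec))]  # accurate to about one FFT bin

# ...and shift the signal back by multiplying with a complex exponential.
y = x * np.exp(-2j * np.pi * est * t)
```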

1. https://www.sigidwiki.com/wiki/Low_Rate_Picture_Transmission_(LRPT)

2. https://www.sigidwiki.com/wiki/FUNcube-1_Telemetry

A quick solver (visma/gui/qsolver.py) has been implemented which dynamically simplifies the expression as the user types. Instead of implementing the *logger*, which was supposed to report whether the input expression is invalid, I implemented this feature in *qsolver* itself: it lets the user know what is missing in the input or whether the input syntax is wrong.

A basic *plotter* using **matplotlib** was implemented in the first phase, which plotted the result in 2D. It has now been extended to plot graphs in both 2D and 3D. The 3D graph is a mesh resembling the surface defined by the input. Given the input, the *plotter* decides (based on the number of variables and the type of input) in which dimension the graph is to be plotted.

The *plotter* logic is:

```python
if (noOfVars == 1 and eqType == "expression") or (noOfVars == 2 and eqType == "equation"):
    graphVars, func = plotIn2D(LHStok, RHStok, variables)
elif (noOfVars == 2 and eqType == "expression") or (noOfVars == 3 and eqType == "equation"):
    graphVars, func = plotIn3D(LHStok, RHStok, variables)
```

The above snippet can be found here.

For example:

- If the input is **x^2**, the plot **f(x) = x^2** is plotted in 2D.
- If the input is **x^2 + y^2 = 5**, the plot **x^2 + y^2 - 5 = 0** is plotted in 2D.
- If the input is **x^2 - 2*y**, the plot **x^2 - 2y = 0** is plotted in 3D. This case (expression type with 2 variables) needs a fix; a plot for **f(x,y) = x^2 - 2y** should be plotted instead.
- If the input is **x^2/2 + y^2/5 = z**, the plot **0.5x^2 + 0.2y^2 - z = 0** is plotted in 3D.
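As an illustration of the second case above (a two-variable equation plotted in 2D), one can evaluate the expression on a grid and draw its zero level set. This is only a sketch assuming matplotlib is available, not visma's actual plotter code:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# Evaluate x^2 + y^2 - 5 on a grid; the zero level set is the curve to draw.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
f = x**2 + y**2 - 5

fig, ax = plt.subplots()
ax.contour(x, y, f, levels=[0])  # draw only the zero contour
fig.savefig("implicit_plot.png")
plt.close(fig)
```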

Below the *qsolver* and *plotter* can be seen in action.

Almost all of the basic GUI work is finished. I will try to enhance the UI by adding a settings menu and options to enable or disable certain UI elements. A simple option for enabling/disabling *qsolver* has already been added.

In the coming days, I will package **visma** for all desktop operating systems (Linux, Mac, and Windows). I am looking into pyinstaller for packaging **visma** for Windows. In the final weeks, I will also write docstrings and inline comments for the code and update the GitHub wiki.

Link to project source and to-do board.

Read on my blog.

During these weeks, I worked on coding a high-level interface for two applications of the Gauss method of orbit determination: Earth satellites and asteroids.

One of the main differences with respect to my previous work is that we relied heavily on the `astropy` library (www.astropy.org), a Python library for astronomical calculations. One of the main factors that inclined us to use this library is that it has a lot of useful functions, is relatively well tested, gives more consistency to the code, and also avoids typos (e.g., in numerical values of astronomical constants). We also incorporated the `poliastro` library, mainly to use its implementation of the Stumpff functions.

One of the main changes with respect to the previous version of our implementation was the abstraction of a "core" computation of the Gauss method and its iterative refinement stage, regardless of the case being handled, whether an Earth-orbiting object (satellite, space debris, etc.) or a Sun-orbiting object (main-belt asteroid, Near-Earth asteroid, comet, etc.). This abstraction is implemented in the `gauss_method_core` and `gauss_refinement` functions, which are low-level functions and may be adapted for user-defined applications.

A new feature of the latest version of the code is that, in case of multiple solutions to the Gauss polynomial, the user is now able to select a distinct root for each occurrence of multiple solutions. As we will see in the examples below, multiple positive solutions to the Gauss polynomial are common when handling real-world data. When more than one feasible solution to the Gauss polynomial is found, the code prints all the feasible roots, allowing the user to select the adequate one.

Also, execution speed was improved for the Near-Earth asteroid case: when handling multiple observations, instead of re-reading the MPC-formatted file each time a triplet of observations is retrieved, the file is now read only once. Since the observation files from MPC are typically a few thousand lines long, this improved execution speed by about 10x.
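The change amounts to hoisting the file read out of the per-triplet loop; schematically (hypothetical helper names, not the project's actual parser):

```python
def load_lines(path):
    """Read the observation file once and keep the lines in memory."""
    with open(path) as f:
        return [line.rstrip("\n") for line in f]

def triplet_indices(n_obs):
    """Indices of the n-2 consecutive observation triplets."""
    return [(i, i + 1, i + 2) for i in range(n_obs - 2)]

# Before: the file was re-read for every triplet (O(triplets * file size)).
# After: read once, then index into the cached list for each triplet.
```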

Another important change is that the local mean sidereal time is now computed using `astropy`'s `sidereal_time` method for `SkyCoord` objects, instead of our own implementation. While doing this, we were able to identify a bug in our implementation of the computation of the local sidereal time!

Finally, in the case of Sun-orbiting bodies, the average orbital elements are computed in the heliocentric J2000.0 ecliptic frame; this allows us to directly compare our results, for example, with the orbital elements listed in JPL's Horizons online ephemeris service.

The code needed to run this example may be found in `gauss_example_ceres.py` in the `orbitdeterminator/kep_determination` folder. There, a high-level interface for processing multiple observations from MPC-formatted files is used to compute a preliminary orbit for Ceres from 27 observations of this body. The example may be run from the command line simply by doing `$ python gauss_example_ceres.py`. The only caveat is that, as stated in the previous blog post, the computation relies on the NASA-JPL DE430 ephemeris for the Earth and the Sun in SPK format, so in order to run this example the `de430t.bsp` file must be present in the `kep_determination` folder. This file may be found at `ssd.jpl.nasa.gov/pub/eph/planets/bsp/de430t.bsp` and may be downloaded via FTP. We decided against uploading this file to the `orbitdeterminator` repo, since it is around 100 MB in size.

After some preliminary code, the example calls the `gauss_method_mpc` function, which takes 27 ra/dec observations in total, separated by time intervals of around 1 month, to compute the orbit of Ceres. That is, this function reads the whole MPC file for Ceres, extracts only 27 lines from it, and then computes the orbit using the Gauss method, taking the observations three by three. We noticed that if shorter intervals were taken, the Gauss method showed some convergence problems. Such problems of the Gauss method with very short observational arcs have been described, e.g., in [1].

Below, we show the output from the `gauss_method_mpc` function. The Sun is shown as a yellow dot at the center of the plot. The computed orbit of Ceres is shown in red, along with the states estimated by the Gauss method. The orbit of the Earth is shown in blue.

The heliocentric ecliptic average orbital elements (we use `astropy`'s value for the mean J2000.0 obliquity of the ecliptic) are:

a = 2.768129232786505 au

e = 0.07938073212954426

I = 10.584725369990045 deg

Ω = 80.4066281982582 deg

ω = 73.07336134446685 deg

On the other hand, the heliocentric ecliptic orbital elements reported by NASA-JPL in Horizons are:

a = 2.767218108003098 au

e = 0.07610292126891821

I = 10.60069567603618 deg

Ω = 80.65851514365535 deg

ω = 71.44921526124109 deg

Thus, we see that although some differences exist, our results give a very good first approximation to the orbit of Ceres, using only 27 observations! Indeed, the Gauss method is really powerful!

In this example we compute the orbit of the first Near-Earth asteroid ever discovered: 433 Eros. Here, we use the Gauss method with only 4 ra/dec observations.

For the heliocentric ecliptic orbital elements, we got the following results:

a = 1.4900359723265548 au

e = 0.2222907524713779

I = 10.87373869230558 deg

Ω = 303.89500826351457 deg

ω = 175.09494980703403 deg

JPL Horizons online service tells us, on the other hand, that:

a = 1.458251585462893 au

e = 0.2229072630918923

I = 10.82944790594134 deg

Ω = 304.4109222194975 deg

ω = 178.6283758645153 deg

Again, although we observe some small differences, the results we obtained are pretty close to the ones computed by JPL. And this time, we used only 4 observations!

[1] Milani, A., Gronchi, G. F., Vitturi, M. D. M., & Knežević, Z. (2004). Orbit determination with very short arcs. I: Admissible regions. *Celestial Mechanics and Dynamical Astronomy*, *90*(1-2), 57-85.
