[GSoC’21|Calibrate-SDR|Ayush] DVB-T for calibration

Hi everyone. I am writing this blog to share my work progress with everyone out there.

I have been working on extending the Calibrate-SDR tool by adding support for DVB-T and DVB-T2 (terrestrial) signals. These signals are broadly used in Europe, Africa, Australia, and Asia, so they can be used to provide calibration to many more SDR users.

For a coverage map, visit http://www.dtvstatus.net/map/map.html

So the question arises: what exactly am I doing?

As many of you who have worked with some sort of SDR will know, these devices suffer from a frequency offset caused by the heating of their crystal oscillators.

This is where Calibrate-SDR comes in, saving you from correcting the frequency offset repeatedly. Calibrate-SDR is based on the idea of synchronizing devices to a constant-frequency component present in a broadcast signal. The tool currently uses DAB+ signals to calculate the PPM shift in frequency. I am enhancing it to use DVB-T signals for the same purpose and so help more people out there.

For further reading about the original Calibrate-SDR, refer to this blog.

Some words of wisdom about the DVB-T signal

DVB-T, short for Digital Video Broadcasting — Terrestrial, is the DVB European-based consortium standard for the broadcast transmission of digital terrestrial television that was first published in 1997[1] and first broadcast in Singapore in February 1998. This system transmits compressed digital audio, digital video, and other data in a MPEG transport stream, using coded orthogonal frequency-division multiplexing (COFDM or OFDM) modulation. It is also the format widely used worldwide (including North America) for Electronic News Gathering for transmission of video and audio from a mobile newsgathering vehicle to a central receive point.

Wikipedia

Thanks to Wikipedia for providing historical details about this signal.

Some technical details about the signal

I would suggest reading the technical standard for a more detailed idea of it:

https://www.etsi.org/deliver/etsi_en/300700_300799/300744/01.06.02_60/en_300744v010602p.pdf

I will cover only the part that was of value for me. Going through this standard and some research, I found that DVB-T signals contain constant components, called pilots, inside the OFDM frame structure.

Visit etsi.org for a better image.

So in addition to the transmitted data an OFDM frame contains:

– scattered pilot cells;

– continual pilot carriers;

– TPS carriers.

The modulation of all data cells is normalized so that E[c × c∗]= 1.

All cells which are continual or scattered pilots are transmitted at a “boosted power level” so that for these E[c × c∗] = 16/9.

The pilots can be used for frame synchronization, frequency synchronization, time synchronization, channel estimation, transmission mode identification and can also be used to follow the phase noise.

The carriers are determined by Kmin = 0 and Kmax = 1704 in 2K mode and 6816 in 8K mode, respectively. The spacing between adjacent carriers is 1/TU, while the spacing between carriers Kmin and Kmax is determined by (K-1)/TU.

The numerical values for the OFDM parameters for the 8K and 2K modes are given in the standard's tables for 8 MHz, 7 MHz and 6 MHz channels.

We collect some continual pilots and average them to obtain the overall current frequency. For this, we create an array of all the continual pilot carrier indexes and use it.

Then we subtract the known DVB-T pilot frequencies from the measured ones. Hence, we have the PPM shift. That is essentially what our tool does.
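To make this concrete, here is a minimal Python sketch of the pilot-based estimation. It is illustrative only and not the actual CalibrateSDR code: the function name is mine, the continual-pilot carrier indices must be taken from the tables in EN 300 744, and the residual offset is assumed to be smaller than half a carrier spacing.

import numpy as np

def estimate_ppm_offset(iq, fs, f_center, pilot_indices, t_u, kmax):
    # iq:            complex baseband samples of the DVB-T channel
    # pilot_indices: continual-pilot carrier numbers from EN 300 744
    # t_u:           useful OFDM symbol duration in seconds (224e-6 for 2K / 8 MHz)
    # kmax:          1704 for 2K mode, 6816 for 8K mode
    n = len(iq)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(iq * np.hanning(n))))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))

    deviations = []
    for k in pilot_indices:
        f_expected = (k - kmax / 2) / t_u                 # pilot position relative to the band centre
        window = np.abs(freqs - f_expected) < 0.5 / t_u   # search half a carrier spacing around it
        if not np.any(window):
            continue
        f_measured = freqs[window][np.argmax(spectrum[window])]
        deviations.append(f_measured - f_expected)

    return np.mean(deviations) / f_center * 1e6           # average deviation expressed in ppm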

It’s more interesting to work on once done with the boring Research work.

Anonymous Developer

Links –

The working repository is https://github.com/AerospaceResearch/CalibrateSDR/tree/dvbt

I am attaching a test file too for a better understanding of this tool. Test file

Some more papers –

https://ca.rstenpresser.de/~cpresser/tmp/dvbt_7_paper.pdf

https://www.ese.wustl.edu/~nehorai/paper/Radar_Harms.pdf

http://ntur.lib.ntu.edu.tw/bitstream/246246/200704191002918/1/01258670.pdf

More chat on –

https://www.linkedin.com/in/ayush-singh-101/

https://aerospaceresearch.zulipchat.com/#narrow/stream/281823-CalibrateSDR/topic/Signal.3A.20DVB-T

[GSoC2021] CalibrateSDR GSM Support – first coding period

Introduction

CalibrateSDR, developed by Andreas Hornig, works perfectly with DAB+ signals. We can use this Python program to calibrate SDR devices. As part of Google Summer of Code, I have been working on the GSM signal standard to make CalibrateSDR compatible with it.

Before moving on, please read the initial blog on using CalibrateSDR, written by our mentor. The primary focus of this project is to extend the tool's applicability to more signal standards, so as to make it helpful for the wider SDR community. Since DAB+ is mainly used in Europe, signal standards like GSM, LTE, and NOAA weather satellites (using the sync pulses within their data) can be used elsewhere.

Currently, it uses the pyrtlsdr package, which makes it work with RTL-SDR. Piping the API to work with other SDRs will also give the project a wider range of applications in the SDR community.

My first weeks of coding focused mainly on implementing GSM support. Working with the GSM frequency correction channel to calculate the offset is my primary task.

Proposed final working of CalibrateSDR

Working of GSM to calibrate the SDRs

GSM uses time division to share a frequency channel among users. Each frequency is divided into blocks of time known as time slots. There are 8 time slots, numbered TS0 – TS7. Each time slot lasts about 576.9 μs. The bit rate is 270.833 kb/s, so a total of 156.25 bits can be transmitted in each slot.

Each slot allocates 8.25 of its "bit times" as guard time, split between the beginning and the end of each time slot. Data transmitted within each time slot is called a burst. There are several types of bursts.

The "frequency correction" burst is a burst of a pure frequency tone at 1/4th the bit rate of GSM, i.e. (1625000 / 6) / 4 = 67708.3 Hz. By searching a channel for this pure tone, we can determine the clock offset by measuring how far the received tone is from 67708.3 Hz.

How is it working?

A more robust way is to implement a hybrid of the FFT and filter methods. We could use the adaptive filter described in the paper: G. Narendra Varma, Usha Sahu, G. Prabhu Charan, Robust Frequency Burst Detection Algorithm for GSM/GPRS (https://ieeexplore.ieee.org/document/1404796).

After finding the positions of the FCCH bursts received by the SDR, it is easy to calculate the offset. We measure how far the FCCH burst is shifted from where we expect it, that is, 67708.3 Hz above the frequency centre. Simply put, if there is no offset, we see these tone bursts at exactly 67708.3 Hz offset with respect to the centre frequency of the channel.
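As a rough illustration of that last step (not the CalibrateSDR implementation; the function name and windowing are my own), the offset of a single detected FCCH burst can be turned into a ppm value like this:

import numpy as np

FCCH_TONE_HZ = (1625000.0 / 6.0) / 4.0   # about 67708.3 Hz above the carrier

def ppm_from_fcch(iq_burst, sample_rate, center_freq):
    # iq_burst: complex baseband slice that contains one FCCH burst
    n = len(iq_burst)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(iq_burst * np.hanning(n))))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / sample_rate))
    measured_offset = freqs[np.argmax(spectrum)]   # strongest tone in the burst
    shift_hz = measured_offset - FCCH_TONE_HZ      # deviation from the nominal 67708.3 Hz
    return shift_hz / center_freq * 1e6            # clock error in parts per million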

I have completed the program to output the channel frequencies from the given ARFCN numbers of GSM. Check the code for the same here.
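For reference, the ARFCN-to-frequency mapping for the most common bands follows the standard formulas shown in the sketch below; the helper name is mine and the CalibrateSDR code may structure this differently:

def arfcn_to_downlink_hz(arfcn):
    # Downlink carrier frequency in Hz for P-GSM 900, E-GSM 900 and DCS 1800
    if 1 <= arfcn <= 124:                  # P-GSM 900
        return 890.0e6 + 200e3 * arfcn + 45.0e6
    if 975 <= arfcn <= 1023:               # E-GSM 900
        return 890.0e6 + 200e3 * (arfcn - 1024) + 45.0e6
    if 512 <= arfcn <= 885:                # DCS 1800
        return 1710.2e6 + 200e3 * (arfcn - 512) + 95.0e6
    raise ValueError("ARFCN outside the bands handled in this sketch")

print(arfcn_to_downlink_hz(20) / 1e6)      # ARFCN 20 -> 939.0 MHz downlink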

Detect and visualize the FCCH bursts!


Currently, the program has been tested only with IQ wav file recordings. Even though the code has been designed to work with RTL-SDR sticks, it has not yet been tested with a live connection. Find the link to the test files here.

After cloning the repo locally, run the setup to install the requirements using the command: python setup.py install

To test with GSM files, run:

python cali.py -m gsm -f <location of wav file> -rs <sampling rate> -fc <frequency center>

First, we get the plot of the average power spectrum. Play with the code to increase the N value, and you will see the line become sharper.

The figure above shows the TDMA frames generated by the GSM Signal.

To determine the FCCH bursts from the signal, plot the spectrogram using the function present in gsm.py. The spectrogram_plot function performs the FFT and outputs the figure.

The generated FCCH bursts detection can be visualized as shown below:

We can see the pure-tone FCCH bursts occurring at specific intervals; they can be visualized as small blue dots at a range of 0.25 from the centre.

Thus, implementing a filter bank and calculating the positions of these FCCH bursts will give us the offset frequency, since we know these FCCH bursts occur at a distance of 67708.3 Hz from the frequency centre.

I will update the code after testing with GSM channels, and the function to calculate the final frequency offset will be committed to the repo.

The work items for the second coding period are the implementation of LTE and NWS support, as well as bridging to a more generalised SDR API, SoapySDR, which also has Python bindings.

Find out the project updates in my branch here: https://github.com/aerospaceresearch/CalibrateSDR/tree/jyrj_dev


[GSoC2021: findsatbyrf] Summary of the first 5 weeks

By Binh-Minh Tran-Huu on 2021-07-15

1. Introduction

Because of the recent sharp growth of the satellite industry, it is necessary to have free, accessible, open-source software to analyze satellite signals and track them. To achieve that, as one of the most essential steps, those applications must calculate the exact centers of the input satellite signals in the frequency domain. My project is initiated to accommodate this requirement. It aims to provide a program that can reliably detect satellite signals and find their exact frequency centers with high precision, thus providing important signal analysis and satellite tracking statistics.

2. Overview

The project aims to locate the exact centers of given satellite signals with the desired accuracy of 1kHz, based on several different methods of finding the center. At first, the center-of-mass approach will be used to determine the rough location of the center. More algorithms will be applied from that location depending on the type of the signal to find the signal center with higher accuracy. 

Currently, for many NOAA signals, with the center-of-mass and “signal peak finding” approach (that will be shown below), we can get results with standard errors less than 1 kHz. For example, with the following signal, the standard error is 0.00378 kHz.

3. Theoretical basis

The overall flowchart

3.1. Fast Fourier Transform (FFT)

The Fourier transform is a well-known algorithm to transform a signal from the time domain into the frequency domain. It extracts all the frequencies and their contributions to the total signal. More information can be found at Discrete Fourier transform.

The fast Fourier transform is the Fourier transform computed in a clever way that reduces the time complexity, thus reducing the time it takes to transform the signal.

3.2. Noise reduction and background signal reset

There is always noise in actual signals, but noise generally has two important characteristics: it is normally distributed and its amplitude does not change much with frequency. You can see the signal noise in the following figure:

If we can divide the signal in the frequency domain into many parts such that we are sure that at least one of them contains only noise, we can use that part to determine the strength of the noise.

For example, consider only this signal segment:

By taking its average, we can find where the noise is located relative to amplitude 0. By subtracting this average from the whole signal, we ensure that the noise lies around zero amplitude.

Next, we want to reduce all the noise to zero. To do that, we consider the distribution of noise, which is a normal distribution.

Photo from Characteristics of a Normal Distribution.

From this distribution, we are sure that 99.9% of noise has an amplitude less than three times the noise standard deviation. If we shift the whole signal down by 3 times this standard deviation, 99.9% of the noise will have amplitude less than 0.

From there, we can simply remove every part of the signal with an amplitude less than zero. We are then left with a noise-free signal whose background has been reset to 0.

You can clearly see the effect of this algorithm by looking at the signal of PIXL1 satellite above, where all the noise has been shifted to below 0.
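A minimal numpy sketch of this noise-reduction step (illustrative, not the exact findsatbyrf code; it assumes we already know a slice of the spectrum that contains only noise):

import numpy as np

def reset_background(spectrum, noise_only_slice):
    noise = spectrum[noise_only_slice]        # a segment known to contain only noise
    centered = spectrum - np.mean(noise)      # noise now lies around amplitude 0
    cleaned = centered - 3.0 * np.std(noise)  # ~99.9 % of the noise drops below 0
    cleaned[cleaned < 0] = 0.0                # remove it and reset the background
    return cleaned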

3.3. Center-of-mass centering

This algorithm is simple: the centroid position is calculated as (sum of (amplitude × position)) / (sum of amplitude), similar to how we calculate the center of mass in physics. The result of this algorithm is called the spectral centroid; more information can be found at Spectral centroid.
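In numpy this is essentially a one-liner (a sketch, assuming freqs and amplitudes are arrays over the FFT bins):

import numpy as np

def spectral_centroid(freqs, amplitudes):
    # center of mass of the spectrum: sum(amplitude * frequency) / sum(amplitude)
    return np.sum(freqs * amplitudes) / np.sum(amplitudes)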

3.4. Peak finding

For signals with clear peaks, such as APT (NOAA), finding the exact central peak of the signal gives us good results. Starting from the rough location found by the center-of-mass method, we scan its neighborhood for the maximum peak. This peak is the center of the signal that we want to find.

For APT signals, this peak is very narrow, therefore this method is able to give us very high precision.
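A sketch of this refinement step (the function name and the search-bandwidth parameter are mine, not the actual findsatbyrf code):

import numpy as np

def refine_center_by_peak(freqs, amplitudes, rough_center_hz, search_bw_hz):
    # Look only at the neighbourhood of the centroid estimate and take the
    # strongest bin as the refined signal center.
    window = np.abs(freqs - rough_center_hz) <= search_bw_hz / 2.0
    return freqs[window][np.argmax(amplitudes[window])]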

3.5. Predicted signal centers from TLE

TLE (two-line element set) information of a satellite can be used to determine the position and velocity of that satellite in its orbit. By using this position and velocity data, we can calculate the relativistic Doppler effect caused by the location and motion of the satellite, and thus the signal frequency we expect to receive on the ground.
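The sketch below shows one way to obtain such a prediction with the skyfield library: propagate the TLE, take the range rate to the ground station by finite differences, and apply the first-order Doppler shift (the full relativistic correction is a small refinement on top of this). The file name, station coordinates and transmit frequency are placeholders, and this is not the findsatbyrf implementation:

import numpy as np
from skyfield.api import EarthSatellite, load, wgs84

C = 299792458.0                 # speed of light, m/s
F_TX = 137.62e6                 # nominal transmit frequency, Hz (example)

ts = load.timescale()
line1, line2 = open("satellite.tle").read().splitlines()[-2:]   # last two lines of the TLE file
sat = EarthSatellite(line1, line2, "SAT", ts)
station = wgs84.latlon(48.777, 9.2356, elevation_m=200.0)        # Stuttgart example

step_s = 1.0
t = ts.utc(2021, 7, 1, 12, 0, np.arange(0, 600, step_s))         # 10 minutes of samples
distance_m = (sat - station).at(t).distance().m

range_rate = np.gradient(distance_m, step_s)                      # m/s, positive = receding
f_received = F_TX * (1.0 - range_rate / C)                        # expected frequency on the ground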

3.6. Error calculation

Assuming the TLE gives us the correct signal center, we can calculate the standard error of the result as the standard deviation

σ = sqrt( (1/n) · Σ xᵢ² ),

where n is the number of samples and xᵢ is the difference between the center frequency we calculate from the .wav file and the frequency we get from the TLE.

4. Implementation in code

https://github.com/aerospaceresearch/findsatbyrf/tree/bm_dev

  • main.py is where the initial parameters are stored. The program is executed when this file is run.
  • tracker.py stores the Signal object, the Python object that holds all the information about a signal and the functions to find its center.
  • tools.py contains the functions necessary for our calculations and the TLE object used for center prediction.
  • signal_io.py stores functions and objects related to the input and output of our signal.
5. Current results (for APT/NOAA)

Standard error = 0.00378 kHz

6. Usage instructions:
    1. Create a folder with the name of your choice in the ‘findsat’ folder, for example, ‘data’.
    2. Put the .wav file of your signal in this folder with any name. 
    3. Put a satellite.tle file in the folder containing the TLE of your satellite.
    4. Put a station.txt file in the folder containing the name and location of the ground station where you recorded the signal, each separated by a “=”; see the example after this list (the numbers should be in degrees and meters).
    5. Change the content of main.py as instructed in the code, then run main.py. The result will be put in your ‘data’ folder.

name = Stuttgart

long = 9.2356

lat = 48.777

alt = 200.0

Example of station.txt

Future improvements

  • Enable running the program directly from the command line instead of editing and opening a Python file before running.
  • Add more methods to find the signal centers for other signal types.

Demonstration of the code. This type of video consumed too much memory, so it is not used anymore, but the function could be reintroduced in the future.

Code demonstration

Implementation of data analysis compilation interface in a satellite monitoring simulator

What is a CubeSat?

In a few words, a CubeSat is a satellite with a cubic form. The basic module is a 10x10x10 cm cube with a mass of 1.33 kg, known as a 1-unit (1U) CubeSat.

CubeSats were created to provide a low-cost, flexible and quick-to-build alternative for reaching space.

Virginia CubeSat Constellation

CubeSats are now affordable tools for teaching and research at universities and research centers. Although they are simple platforms, their complexity can be increased. Simulating the capabilities of additional subsystems is the first step in assessing whether it is worthwhile to include them in a platform. In this case, the task is the implementation of sensor and actuator models in a CubeSat simulator and their visualization.

Software and hardware specifications and parameters:

The parameters were selected by Javier Sanz Lobo in his master's thesis "Design of a Failure Detection Isolation and Recovery System for Cubesats".

LEO (low Earth orbit) was selected because of its highly demanding orbit maintenance and pointing accuracy requirements. The satellite selected is a 2U CubeSat.
Therefore, its size is 10x20x10 cm and it weighs 2.66 kg. The image illustrates the relative position of the main elements with respect to the body axes, the Earth and the orbital motion. The propulsion system was placed on the face opposite to the orbital motion to counteract drag, and the optical payload points to nadir (a selected face always oriented towards the Earth).

Example proposed in master thesis:

Internally and intrinsically how does the software work?

Summary of the Attitude and Orbital Control System elements included in the calculations:

1. Control:

1.1 Attitude control:

1.1.1 Attitude controller, magnetorquer, reaction wheel allocation, reaction wheel controller and reaction wheel model

1.2 Orbit control:

1.2.1 Thruster allocation, orbit controller and thruster model

2. Dynamics:

2.1 Six-degree-of-freedom model: attitude dynamics and kinematics (with quaternion evolution)

2.2 Environment:

2.2.1 Atmospheric drag, gravity gradient torque, J2 perturbation, magnetic torque and third-body perturbation

2.3 Keplerian orbit: vector normalizations

3. FDIR: with frozen and/or sudden-death signals included if desired

3.1 Gyroscopes FDIR (control panel included)

3.2 Reaction wheels FDIR (control panel included)

3.3 Thrusters FDIR (control panel included)

4. Guidance:

4.1 Direction cosine matrix to quaternions (positive and negative traces included)

5. Navigation: sensors and actuators included.

5.1 Attitude filter, GNSS, gyroscope, orbit filter, and star tracker

6. Visualization: 3D orbit, ground track and COE

Interface created with improved and modifiable 3D models:

Input data:

Output interface results:

Expected Visualization Results:

https://summerofcode.withgoogle.com/projects/?sp-search=R.David%20A#4750821465522176

Call for Google Summer of Code 2021! Be our Summer Student and code your open-source space projects

For the 7th time, AerospaceResearch.net[0] is proud to have been selected as an official mentoring organization for the Google Summer of Code 2021 (GSoC) program run by Google[1].
And we are now looking for students to spend their summers coding on great open-source space software, getting paid by Google, releasing scientific papers about their projects and supporting the open-source space community.

Until the 13th of April 2021, students can apply for hands-on experience with applied space programs. As an umbrella organisation, AerospaceResearch.net and the ep2lab of Carlos III University of Madrid are offering you various coding ideas[2] to work on:

  • The Distributed Ground Station Network – global tracking and communication with small-satellites[2][4]
  • ep2lab of Carlos III University of Madrid[2]
  • or your very own proposal![2]

If you are a student, take your giant leap into the space community, realizing your very own space software, and the chance to be recognized by Google headhunters.
If you are a professor, feel free to propose this great opportunity to your students or even have your own projects coded and realized!

Over the last years, we have mentored more than 21 students during Summer of Code campaigns[6], and we have achieved several great things together. We have released several papers. We dedicated computing power worth 60,000 PCs to those students' projects, even helping with their bachelor theses, and indirectly supporting the IMEX program[5] of the European Space Agency (ESA). And, as a surprise and an honor for us, we were on the plenary stage with Canadian astronaut Chris Hadfield to promote those projects during the International Astronautical Congress 2014 in Toronto.

We want to repeat that success, and now it’s your turn to be active in open-source space!

Apply today, find all projects on the GSOC webpage![1]
We are waiting for you,

Andreas Hornig, Head of Platform

[0] https://aerospaceresearch.net/?page_id=2156
[1] https://summerofcode.withgoogle.com
[2] https://aerospaceresearch.net/?page_id=2156#codingideas
[3] http://ksat-stuttgart.de
[4] https://www.youtube.com/watch?v=TC4Ls3AGHf4
[5] https://www.youtube.com/watch?v=FY0vjbBp4eg
[6] https://www.youtube.com/watch?v=gkklxZxjT-8&list=PL-lXf3kTWgqybFL-VOmVxKyjnrVPE7DBB

Feel free to forward this email to whomever you think it may concern!

### More Information ###

# About Google Summer of Code (GSOC)[1]:
Google Summer of Code is a global program focused on introducing students to open source software development. Students work on a 3 month programming project with an open source organization during their break from university.

Since its inception in 2005, the program has brought together 12,000+ student participants and 11,000 mentors from over 127 countries worldwide. Google Summer of Code has produced 30,000,000+ lines of code for 568 open source organizations.

As a part of Google Summer of Code, student participants are paired with a mentor from the participating organizations, gaining exposure to real-world software development and techniques. Students have the opportunity to spend the break between their school semesters earning a stipend while working in areas related to their interests.

In turn, the participating organizations are able to identify and bring in new developers who implement new features and hopefully continue to contribute to open source even after the program is over. Most importantly, more code is created and released for the use and benefit of all.

# About AerospaceResearch.net[0]:
We are a DGLR young academics group at the University of Stuttgart for aerospace-related simulations using distributed computing. Our global community of 15,000 citizen scientists donates the idle computing time of 60,000 computers, forming a virtual supercomputer connected via the Internet. This massive network is used for solving difficult space numerics or for sensor applications. We are bringing space down to Earth and supporting the space community, from students to organizations.

# Distributed Ground Station Network [DGSN]:
The Distributed Ground Station Network is a system for tracking and communicating with small satellites and other aerial vehicles. The concept includes a global network of small and cheap ground stations that track beacon signals sent by a satellite, plane or balloon. The ground stations are hosted by ordinary people at home, so-called citizen scientists, and are connected via the Internet. A broadcast beacon signal is received by at least 5 stations and can then be used for trilateration to obtain the position of the signal's origin. For this, each ground station correlates the received signal with the precise reception time, which is globally provided and synchronized by GPS. This will help small-satellite providers and even Google's Loon project to track their vehicles quickly, globally and simply!

Hacktoberfest 2020: It is this hacking time of the year again!

It is time again for Hacktoberfest, and we are already taking part with our own projects and in related projects. We will hack together, finish stuff for the GSoC projects, and some of us will even take part in the virtual NASA Space Apps Challenge. All of these open-source projects can use your help. So you will do something good and also earn a brand new, limited Hacktoberfest shirt!


[GSoC 2020 | MOLTO | Brandon] Refactorization MOLTO

Introduction

Hi, my name is Brandon Escamilla. I am an aerospace engineer from Universidad Marista de Guadalajara (Guadalajara, México). This is the second time I have been selected for GSoC, and I am really grateful for this opportunity. This time, I came with a proposal to improve the existing MOLTO project, the same project I worked on last year. MOLTO is a big project with tons of work remaining before going to production. Last year, I worked on connecting MOLTO with a user interface. It was not easy, since MOLTO is a Matlab tool that requires special connections and is not as simple as consuming a normal API, so I created an API with Python/Flask to pass my requests from the frontend directly to Matlab. It worked, and we communicated successfully with the Matlab tool, but there were some problems we needed to resolve in order to have a "production environment". At the end of that GSoC, we had a UI created in React.js, an API using Flask, and a local database using SQLite. This was enough to prove the architecture I described in my proposal: a simple architecture for bringing to the web those projects created in Matlab that cannot go to production because of Matlab's closed license.

Last year blog: [GSoC 19′ |UC3M ] MOLTO – Mission Designer]

So, here you have the new flow of MOLTO from a route-based perspective, and from a logical flow.

The main issues that needed to be solved before going to production are listed below:

  1. Error communication between Matlab and Python.
  2. Remove the real-time (socket) communication between UI – Python – Matlab.
  3. Create a new service based on codes to retrieve a mission.

But there were also some lower-priority improvements that needed to be added:

  1. Friendly user interface for new users.
  2. New Design
  3. Create a database in production to save missions configuration and results.
  4. Create an email service to send mission codes.
  5. Create CMS to add information in an easy way for maintainers.
  6. New view for the service based on codes.
  7. Improve deployment of Frontend.
  8. Response optimization from Matlab Genetic Algorithm.
  9. Toy problem for new users.
  10. Optimize responsive views of the site.
  11. Improve Celery implementation for Background tasks.

Before continuing, I would like to add that after GSoC I continued working on MOLTO, adding small features and improving the user experience as far as I could. Through the constant communication between my previous mentor David and me, he invited me to start a research stay at the University Carlos III, where he was doing his doctorate. This led to another experience, in which David has been my bachelor's degree thesis advisor. (A good story thanks to GSoC! 😃)

Hands-On!

Once accepted, I started working on my proposal which did look something like this:

1. CMS Implementation

For this purpose, I used an open-source application called Strapi, which allows you to develop a CMS locally and deploy it to production in an easy way. It helps you develop the database, API endpoints, CDN requests, models, emails, and more.

I installed Strapi locally and started creating the services needed for MOLTO, which can be divided as follows:

  1. Collaborators
  2. Missions
  3. Users
  4. Email Service
  5. Motors

Once I had created all the services described above, I launched the CMS to production. The easiest way to do this is with Heroku, which lets you have an app in production with very few steps and little configuration. You can find the admin at this URL: https://molto-admin.herokuapp.com/admin. Of course, it has a login and only the maintainers of MOLTO can access it, but I am leaving a few screenshots so you can see the interface.

I am using a PostgreSQL database, which is provided as an add-on of the Heroku app. ✅

And this is the documentation of the endpoints:

https://molto-admin.herokuapp.com/documentation/v1.0.0

2. New MOLTO design

Before going to production we needed a new design, because the first one was more of an MVP. Below I add links to the old site, the designs, and the newly implemented design. Almost all the components of the website changed, from the home page to MOLTO-IT, and new views were also added.

Here I will leave you some screenshots of the old design and the new design. (I will include the links in case you want to check it out )

Old site: https://molto-it-ui-old.vercel.app/

New site: https://molto-it-ui.vercel.app/

Another new feature is in the motors section of MOLTO-IT, where you can now see the motor configuration by clicking on "more information".

Another feature is the mobile version of MOLTO and the new menu. Nowadays it is really important to have a good mobile version of a website, since most users will visit your site from their phones. So I refactored the mobile version, and it now works well on phones.

3. Tour MOLTO

In order to provide a better experience in MOLTO, I tried many approaches. I started using a library called React Tour, and later another one called React Joyride, but I noticed that both were very intrusive to the user experience and that the application's performance was really bad when using them. So I preferred to create my own component, which shows an information icon; if you hover over it on desktop or tap it on mobile, it displays a box with useful information explaining what to enter in the inputs and what the inputs are for.

I found this way less intrusive and useful in my opinion. Here you have one screenshot of this component.

4. New service for creating a mission or searching for a created mission

One of last year's requirements was to create a UI showing how the genetic algorithm evolves over time. This was possible, but it was also a bad idea from a user perspective. Once users selected their configuration, they had to wait for the API response, which could take from 3 minutes to 10 minutes or more (more generations and more population means more time). During this time, Matlab ran the genetic algorithm while Flask kept a socket open to consume the files being created in real time in a directory of the server where MOLTO lives. This worked well in only one situation: a mission with a really low number of generations and a small population, since such missions return results fast. Once you started a mission with more than 30 generations and a population larger than 50, you had to wait a long time before the socket returned the first generation, and if the user wanted to see the final generation, they had to wait minutes to hours without closing the browser (once the browser was closed, the socket connection ended). So, in the time between the two GSoCs, I created a new architecture based on codes: you create your mission and the website returns a code, with the option of sending the code to your email, so you can return in an hour, a day, or a week and find all your results stored in the database. Of course, this was a lot of work; I had to change most of the logic that connected to Python via sockets, and new views had to be created to retrieve the mission, send the code by email, and so on.

Around the middle of the second evaluation I started working on this, and after days of coding the new service was available. I will add a few screenshots here, but of course you can also visit the MOLTO website.

The view that looks for missions has an input where you enter the code MOLTO gave you when you finished configuring your mission. This input can detect invalid codes and also returns the current status of the mission. This is thanks to Celery, which before was just running tasks in the background, but with the proper configuration lets you check the status of the mission in real time. 🛰

5. Celery and Matlab errors

As I said at the top of this post, one big problem was that I had issues connecting Matlab errors to Python; because of this, once a mission failed I did not know the real reason why. At the start this was not a problem, because I was always using the same JSON for creating missions and testing. But once you tried different configurations, it worked only intermittently: sometimes it worked, sometimes it just crashed, and I did not know what was happening. So this year I decided to solve this issue by, as said before, using Celery properly together with the MATLAB Engine for Python.

The big issue was that I was not adding the configuration needed to check the tasks in Celery's monitoring UI, called Flower, and I was also not using the configuration needed to read Matlab logs from Python. It was a hard task, but it is finally working, so I will put here a screenshot of the logs I am receiving on the server, where I can see exactly why Matlab is not working.
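To give an idea of how these pieces fit together, here is a hypothetical sketch of a Celery task that launches a mission through the MATLAB Engine for Python and keeps the MATLAB output for debugging. The task and MATLAB function names (run_mission, molto_it), the JSON-config argument and the broker URL are illustrative, not the actual MOLTO-IT code:

import io
from celery import Celery
import matlab.engine

app = Celery("molto", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def run_mission(config_json_path):
    # one MATLAB engine per task; stdout/stderr capture exposes MATLAB-side errors
    out, err = io.StringIO(), io.StringIO()
    eng = matlab.engine.start_matlab()
    try:
        result = eng.molto_it(config_json_path, nargout=1, stdout=out, stderr=err)
        return {"status": "finished", "result": result, "log": out.getvalue()}
    except matlab.engine.MatlabExecutionError:
        # the captured stderr tells us why MATLAB failed before re-raising
        print(err.getvalue())
        raise
    finally:
        eng.quit()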

I can also see all the missions in real-time in a dashboard, all the missions that failed, all the successful missions, and also the tasks that are running.

6. New host for the frontend

There are a lot of ways to host a frontend application: we could host it on the MOLTO server, or on another service like AWS. But I recently started using Vercel to host other projects, and it has been a great experience, since you can have multiple environments for testing, production, development, etc., all in one place, connected to your GitHub repository or to your CLI. It makes development easier, and that is why Vercel is the platform MOLTO will be using for frontend hosting.

We have right now two environments dev, and production, all the changes that will be applied to the UI of MOLTO will pass first by dev, after approval all these changes could be applied to production.

7. Toy problem

One problem I faced when demonstrating MOLTO was that I was the only person who knew how to use it. That was a problem, because you cannot deploy an application to production if it is not intuitive.

To improve this situation, I used the data-management architecture we already rely on, Redux, to pre-load a problem. Every time you enter MOLTO-IT you can just change the name, go to the last tab, and click send, and you have a useful mission from Earth to Jupiter. So you can test this mission and actually see the Pareto front without any problem.

8. New flow with code – Pareto front

I've been talking about the new flow, but I have not shown you how it looks after you enter a code with a finished state. This view has some improvements as well.

The first one is that you are able to see all the results from generation 1 to the last generation with its respective results. So you can test any value to plot the orbit. You can also see in real-time how the chart changes once you select another Pareto point.

Conclusion

It was an honor to work again in Google Summer of Code 2020, and I would say I finished what I proposed at the start of this project. I want to thank all the people who made this possible: Dr. Manuel Sanjurjo and Dr. David Morante, for guiding me and helping me every time I had issues or problems to resolve. I also want to thank them for the research stay at UC3M; I hope I can continue working with both of them on this and other projects.

I also want to thank Andreas Hornig for being there for any question and always providing the necessary resources to keep working, and also for always reminding me of the deadlines 😅 and pushing me to give the best of me.

As far as I know, this is the last GSoC in which I can contribute as a student ☹️, but my next goal is to keep contributing to open source, and why not also contribute as a mentor, if possible, in the next GSoCs. I would really like to share all that I have learned during these 2 years. 🛰

Thank you for reading.

Brandon Escamilla

Useful resources:

  1. Production website: molto-it-ui.vercel.app
  2. Repository: https://github.com/uc3m-aerospace/MOLTO-IT
  3. Email: brandon.escamilla@outlook.com | brandon.escamilla@aerospaceresearch.net

[GSoC2020|OrbitDeterminator|Nikhil] Tracking Continuous and Sporadic Signal of Satellites – Wrapping Up

Introduction

With the increasing popularity of CubeSat technologies, it has become ever more important to have low-cost systems that complement the economical and self-reliant nature of today's CubeSat providers. One of the most important parts of an end-to-end small-satellite business is ground-based tracking, which provides valuable information on a satellite's whereabouts. The satellite tracking industry relies on large antennas and high-power transmitters, which come at the cost of expense and lead time.

It is thus useful to have an alternative tracking method, for example Doppler tracking. Doppler-based orbit determination converts the Doppler frequency shift into a ranging problem. To do Doppler tracking, one first has to track the frequency of the signal. This keeps the cost of the tracking system low, because the equipment needed beyond the essential receiver is small, at a minimum consisting of an amplifier and a variable oscillator. This project aims to provide a universal tracking solution for burst-type and continuous-type satellite signals.

Overview

This project aims to have a universal tracker for sporadic and continuous type signals. This requires the above workflow. Overall there are three main stages of processing before we arrive at our final track. Every stage has its own function and uses a particular algorithm. 

  • Stage 1: Pre-Processing
  • Stage 2: Decision Making
  • Stage 3: Tracking

Waterfall

Before the pre-processing stage, it is important to have the signal in the frequency domain, which we get by taking the Fourier transform. The program performs the FFT in chunks to improve memory usage and runtime. It then selects the desired channels of a specific bandwidth, as per the user's requirements.
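A sketch of this pre-processing stage (names are illustrative, not the actual OrbitDeterminator code):

import numpy as np

def channel_waterfall(iq, fs, fft_size, f_center, channel_freq, channel_bw):
    # FFT the recording chunk by chunk to keep memory bounded, and keep only
    # the bins of the requested channel. Returns one spectrum row per chunk.
    freqs = np.fft.fftshift(np.fft.fftfreq(fft_size, d=1.0 / fs)) + f_center
    in_channel = np.abs(freqs - channel_freq) <= channel_bw / 2.0
    frames = []
    for start in range(0, len(iq) - fft_size + 1, fft_size):
        chunk = iq[start:start + fft_size]
        frame = np.abs(np.fft.fftshift(np.fft.fft(chunk)))
        frames.append(frame[in_channel])
    return freqs[in_channel], np.array(frames)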

Signal Detection

FFT Averaging

Before we decide whether a certain FFT frame contains the signal or not, we need to remove the consistent artefacts present throughout the duration of the recording.

The basic idea of averaging for spectral noise reduction is the same as arithmetic averaging to find a mean value. This operation is a type of low-pass filtering that can reduce high-frequency noise.

APRS signal example

Calculating an average spectrum involves averaging across common frequencies in multiple spectra. So we subtract an average spectral frame from the sample frame in question. This improves measurement accuracy and also helps to compensate for a low signal-to-noise ratio.

Decision

The decision of whether a signal exists in a given FFT frame is made by checking that the frequency bins neighbouring a sample bin (n) all have magnitudes greater than a dynamic threshold.

an illustration of the decision making

This threshold is calculated as follows:
Threshold = Mean + SD + safety gap

Black indicates selected bins
A NOAA signal's full spectrum; green = selected channel
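A compact sketch of this decision rule (parameter names and the safety-gap value are illustrative, not the exact OrbitDeterminator code):

import numpy as np

def frame_has_signal(frame, n_bin, neighbours=2, safety_gap=3.0):
    # A signal is declared in this FFT frame if the bins around candidate bin n
    # all exceed the dynamic threshold mean + standard deviation + safety gap.
    threshold = np.mean(frame) + np.std(frame) + safety_gap
    window = frame[max(n_bin - neighbours, 0):n_bin + neighbours + 1]
    return bool(np.all(window > threshold))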

Tracking

Finding the center

Once the signal is found in a particular FFT frame, it is a matter of finding the centre of the signal's geometry. To cover most signal types, a generic approach has to be taken. This is why the spectral centroid is a good enough representation of the signal center. The spectral centroid is analogous to a geometric center and refers to the balance point of the signal.

Centroid = Σ f(n)·x(n) / Σ x(n), where x(n) represents the weighted frequency value, or magnitude, of bin number n, and f(n) represents the center frequency of that bin.

Track Smoothing

The frequency track of the signal through the recording is fitted with a polynomial function of order 3. It is also important to remove outliers before fitting the data.
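A sketch of this smoothing step (illustrative; the real code may choose the outlier criterion differently):

import numpy as np

def smooth_track(times, freqs, order=3, outlier_sigma=3.0):
    times, freqs = np.asarray(times), np.asarray(freqs)
    coeffs = np.polyfit(times, freqs, order)                        # first fit, outliers included
    residuals = freqs - np.polyval(coeffs, times)
    keep = np.abs(residuals) < outlier_sigma * np.std(residuals)    # drop outliers
    coeffs = np.polyfit(times[keep], freqs[keep], order)            # refit without them
    return np.polyval(coeffs, times)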

Waterfall (white-raw track, black-filtered track)
NOAA Waterfall Signal Track (white-raw track, black-filtered track)
APRS Waterfall Signal Track – (white- raw track, black – fitted track)
APRS Waterfall Signal Track (BW-10kHz) – (white- raw track, black – fitted track)

Outputs

The program can output spectral frame and waterfall plots of multi-channels and bandwidths specified by the user. The frequency track of the signal from the specified channels, when found, is finally stored in a JSON file. 

Signal | Channel Frequency | BW | Waterfall | Data
NOAA | 137.62 MHz | 32 kHz | waterfall | json
APRS-1 | 145.825 MHz | 10 kHz | waterfall | json
APRS-2 | 145.825 MHz | 10 kHz | waterfall | json

Acknowledgement 

In the end, I would like to thank AerospaceResearch for giving me the incredible opportunity to work with them in Google Summer of Code 2020. I have learned a great deal and this journey has solidified my belief in open source for space. I would also like to thank Andreas Hornig for being the mentor of this project and extending his guidance and support, whenever needed.

Links


[GSoC 2020 | MOLTO-3BP | Ginés] Finite Fourier series approximation

1. Introduction

I am Ginés Salar, an aerospace engineer from University Carlos III (Madrid, Spain). As this year's GSoC edition comes to an end, allow me the opportunity to give a comprehensive explanation of my contributions to aerospaceresearch.net. In my university's department of aerospace research, there is an interest in developing and testing preliminary trajectory optimizers. This has led, in recent years, to the development of MOLTO (Multi-Objective Low-Thrust Optimizer). This tool provides a two-step optimization process for one of three scenarios: IT (Interplanetary Transfers), OR (Orbit Raising) and 3BP (Three-Body Problem). In this post I will not go into the details of these engines, but I strongly advise the interested reader to read [GSoC 19′ |UC3M ] MOLTO – Mission Designer.

My efforts try to improve upon the MOLTO-3BP, specifically the first step. The classical approach to this problem searches for a set of ballistic trajectories patched by instantaneous impulses. It is assumed that by reducing the fuel consumption of these impulses, the initial guess improves. After that, the engine moves to the second step and introduces the actual control optimization to adapt the orbit to a truly low-thrust mission. There is little knowledge about whether this procedure delivers the best result or merely a local minimum. Providing an answer to this question is what motivated the work done.

2. Work Breakdown

Parallel works by other students have attempted to provide a database with sampled non-keplerian periodic orbits. Initially, these efforts were aimed to replicate real missions, or to try to improve them using their objectives as a guide. The purpose of this database would be to propagate invariant manifolds from the orbits in order to find ballistic transfers that involve libration point orbits. Finally, the aforementioned patching process is carried out by selecting a suitable Poincaré section and analyzing the trajectories‘ intersections with this surface.

Under this environment, my main objectives were:

  • Generalize the capabilities of these functions.
  • Structure the code into a single body.
  • Translate the existing code to Python.
  • Emulate libration point orbits transfers.
  • Provide a new metric for the trajectories‘ suitability with a shape-based approach.

The first point was to isolate all constants from the rest of the code and allow easy access to and control of the studied system. Furthermore, the most interesting libration points, due to the small Jacobi constant required to access their neighbourhood, are the collinear points L1 and L2. We decided that the program should extensively cover both points and the motions around them. This way, the basic building blocks previously used to compute Halo and Lyapunov orbits were extended to apply to both L1 and L2, and were tested for the Sun + Earth & Moon system and the Earth + Moon system.

Simple Input & Output example

From that, it is important to remember that the purpose of these orbits is to propagate the disturbed trajectories that emanate from them. This promotes the idea of generating a common access point that joins orbit creation and manifold propagation, as well as post-processing. Along that line, I homogenized the input/output requirements of both orbit families and standardized the procedure for any future orbit family. The idea still holds for any non-periodic trajectory that in turn becomes relevant for propagating manifolds. These steps were crucial to figure out, as they are particularly relevant for an eventual integration of this code into MOLTO-3BP.

Another relevant point is that the original code requires a Matlab licence, which drastically reduces the code's accessibility for external users. This can be avoided by converting the code into an open-source language with similar inner workings. The obvious candidate was Python, as it is also a high-level language with flexibility similar to Matlab's. This change also introduces the possibility of using any of the many freely available modules, such as numpy, scipy, spiceypy and matplotlib. This way of proceeding not only reduces programming time, but also execution times. Additionally, the program can implement features not currently present in Matlab, like the explicit Runge-Kutta method of order 8 included in scipy's suite, which allows more precise computations for the most sensitive problems.
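As a small illustration of the kind of computation this enables, the sketch below integrates the circular restricted three-body problem with scipy's DOP853 integrator. The mass parameter and the initial state are placeholders, not values used by the project:

import numpy as np
from scipy.integrate import solve_ivp

MU = 3.04e-6   # Sun + (Earth & Moon) mass parameter, approximate

def cr3bp(t, s, mu):
    # Equations of motion in the rotating, nondimensional frame
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + mu) ** 2 + y ** 2 + z ** 2)       # distance to the primary
    r2 = np.sqrt((x - 1 + mu) ** 2 + y ** 2 + z ** 2)   # distance to the secondary
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    az = -(1 - mu) * z / r1**3 - mu * z / r2**3
    return [vx, vy, vz, ax, ay, az]

s0 = [0.989, 0.0, 0.003, 0.0, -0.010, 0.0]   # placeholder state near L1
sol = solve_ivp(cr3bp, (0.0, 3.0), s0, args=(MU,),
                method="DOP853", rtol=1e-12, atol=1e-12, dense_output=True)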

Manifold’s 3D plot for the study of L1 (left, black) to L2 (right, black) halo orbits. Unstable (exiting) trajectories in red. Stable (converging) trajectories in blue. Poincaré surface to register the intersections in yellow.

Following the reshaping of the code, several testing ideas and possible future developments arose. One of the most relevant was the concept of emulating complex sequential orbit transfers, both homoclinic and heteroclinic. The code was provided with the necessary tools to discriminate which manifolds were required by the process, plus the ability to iterate both the orbit generation and the manifold propagation processes.

Phase-space representation at Poincaré section.

Finally, the end objective was to reach a better understanding of the suitability conditions in order to provide better decision metrics for future optimizers handling this problem. This section is still under development. The initial idea remains: reduce the ballistic trajectories to their complex frequencies, compare them, and deduce a figure of merit, such as the delta-v required for jumping between them. Preliminary frequency analyses have been started on top of the tools developed, especially for the simpler 2D case. There are already some promising results, but much testing is still required to ensure good performance.

3. Acknowledgments

In conclusion, I would like to thank Manuel Sanjurjo for his constant and agile support during this enterprise; without his vision this process would not have been nearly as smooth. I also thank David Morante, responsible for the creation of MOLTO, for assisting along the way. On a similar note, I take this opportunity to mention Andreas Hornig as a fine and efficient manager of the community; everything has been perfectly clear right from the start. Last but not least, I thank Google for running this program and giving this sort of opportunity to students like me. It has been a great experience!

4. Useful Links

[GSoC 2020|DeviceHandlerSOURCE|Robin] Device Handler development for the SOURCE project

My name is Robin Müller and I am an aerospace engineer doing my graduate studies at the University of Stuttgart (GitHub: https://github.com/rmspacefish). I am also an active student in the small-satellite society KSat, which is currently working on the CubeSat project SOURCE. More information on this project can be found at https://www.ksat-stuttgart.de/de/unsere-missionen/source/.

The domain of my work was embedded programming in C++. The simplest explanation of my work is that I programmed the handler software for the sensors and for the on-board computer itself. The source code is located on the GitLab server of KSat (https://git.ksat-stuttgart.de/source/sourceobsw). The extensive README provides instructions on how to build the Linux version of the software and how to set up Eclipse properly to allow convenient microcontroller development. It should be noted that the device handlers were tested on a microcontroller with FreeRTOS as the operating system. It has been really fascinating to learn about different types of sensors and their interfaces. Of course, my work included more than just making a few sensors work with something like an Arduino.

A lot of work went into making microcontroller development as convenient as possible while also staying free and open-source. The used development environment is very much in the spirit of open-source: Eclipse was used as an IDE and the software for the target on-board computer is generated using the free ARM toolchain. The used framework is also open source and it is possible to compile the software for Linux as well (Microcontroller or Desktop). It is possible to integrate the functionalities of debugger probes like Segger J-Link (debugger probe not free unfortunately) or OpenOCD and the logging of a serial port into Eclipse. That way, the software can be developed without the need of various additional tools, which might not work on every OS (my personal philosophy: coding for microcontrollers should be (almost) as convenient as coding for Desktop applications). GNU Make is used as the build system for the software. A lot of work went into making the Makefiles readable to allow for easy tweaking. The project can be developed on Linux and on Windows, as long as the ARM toolchain is installed.

Fully integrated microcontroller development environment in Eclipse

I worked with a specific framework designed for small satellite missions called the Flight Software Framework (FSFW, public at https://egit.irs.uni-stuttgart.de/fsfw/fsfw). It was initially designed and created by the Institute of Space Systems (IRS) in Stuttgart for the mission Flying Laptop, which has been launched and is still operational. Using this framework saves a lot of work for small-satellite software developers, for example by providing powerful abstraction layers for different operating systems, building blocks for common components like devices (sensors or other microcontrollers) and controllers (attitude or thermal controllers), and building blocks to enable telemetry and telecommand handling. Keeping the recent developments in space (New Space, miniaturization, CubeSats, ...) in mind, there will probably be even more small satellites in the future, and with them the need to shorten the development cycles for satellite software. The Flight Software Framework is based on C++, which has become more common in the space sector recently; still, a lot of (new) flight software is based on C. Many of the myths surrounding C++ in the context of embedded systems (code bloat, slow, ...) have been disproven, and the language offers excellent tools to write safe code and to model the architecture of systems in the code, using the best capabilities of object-oriented programming.

The device handlers I programmed are based on the FSFW base class DeviceHandlerBase. Following the template method pattern (https://en.wikipedia.org/wiki/Template_method_pattern), this class takes care of a lot of generic code and expects the developer to implement abstract functions to model the unique device. There are certain common functions each (space) device handler needs:

1. Modes: needed to alter behaviour; for example, some devices are off in certain satellite modes.
2. Health state: for example, to perform restarts when necessary.
3. Commandability: it should be possible to command the device handler from ground. The device should also be able to generate telemetry.
4. Communication interface: the device needs to talk to the respective sensor or microcontroller, using a data bus like SPI or UART (e.g. RS232).
5. Power switching: the device handler has to be able to turn a device off or on, using components of the power subsystem (EPS).

Implementing the base class properly is a lot more work than simply making the sensor work on something like a Raspberry Pi or an Arduino, but going through this work has a huge advantage: all of the important functions mentioned above are more or less taken care of, which avoids boilerplate code. It should be noted that the SOURCE project, which is only a 3U CubeSat, contains 4 microcontrollers, two FPGAs and more than 40 sensors (well, 20+ of those are temperature sensors which use the same device handler, of course), so any way to avoid rewriting generic code is very convenient. Furthermore, the device handlers offer a powerful decoupling mechanism by moving the API calls to the used communication bus into a different class, which is passed to the device handler. The result is that the device handlers only include the logic to handle the devices, while the task of calling the hardware's communication drivers is transferred to the communication interface. This is especially nice for devices which can communicate over multiple communication buses or where the configuration of the used bus only differs slightly (a different SPI slave select, a different I2C address, ...).

In these difficult times it is of course better to work from home. After procuring the hardware from the institute, the next task was setting it up. I focused on two device handlers in particular: the ThermalSensorHandler, which takes care of a MAX31865 resistance-to-digital converter, which in turn is connected to a Pt1000 thermal resistor, and the GyroHandler, which handles a BMG250 MEMS gyroscope. Both sensors were soldered onto a housekeeping board engineering model (I'd like to thank Jens Polzin, who is designing this board!), which also contains sun sensors and SPI slave-select expanders (decoders). The two following pictures show the set-up. The large board on the left is the AT91SAM9G20-EK development board, which has the same chip as the iOBC, the on-board computer of the SOURCE project.

General setup with the AT91SAM9G20-EK development board
Housekeeping board prototype (engineering model) with various sensors

The sensors are generally read and configured through certain registers, according to the sensors' datasheets. The basic test for the gyro involved taking the housekeeping board (HKB) and rotating it in both directions around each of the X, Y and Z axes (kind of like a model airplane). It was also validated that the sensor values show the correct sign when rotating around a certain axis. The basic test for the thermal sensor handler included verifying the approximated temperature (room temperature) and checking whether it rises to 30-31 °C when touching the Pt1000 sensor with my fingers.

Both device handlers have a start-up sequence which involves configuring the sensor and putting it into a state in which it can be polled properly. To perform all the initialization and configuration steps sequentially, an internal state machine is used; its usage can be seen throughout the device handler code. The sequence of the device initialization is specified in the doStartUp() abstract function implementation. The actual commands for start-up, mode transitions or shut-down sequences are specified in the function buildTransitionDeviceCommand() (for simple sensors, this will usually only include the start-up sequence). When the configuration is complete, the device enters the MODE_NORMAL or MODE_ON mode and is ready to poll data. The commands for this nominal operation mode are specified in buildNormalDeviceCommand(). buildCommandFromCommand() is used to build commands from external commands (for example, commands coming from ground or from another software component) but is also used by the other command-building functions to avoid duplicate code. The functions scanForReply() and interpretDeviceReply() are used to analyse the sensor data and store it in the local datapool, either for downlink as housekeeping data or for usage by other software components.

On a microcontroller, the print functions generally have to be redirected to a UART peripheral to be sent to the host computer for display. This is how it is done on the AT91. A sample output is shown below, in which the two sensors are polled regularly.

Eclipse internal serial console, showing debug output from the AT91

I also started working on a CoreController component, which takes care of monitoring the on-board computer itself. As a first step, I also took all the steps necessary to enable communication with the iOBC on-board computer in the clean room of the IRS in Stuttgart. The iOBC engineering model (EM), being a rather expensive piece of hardware which is only available once, will be installed in the clean room and later be integrated into the flatsat, one of the most important testing platforms for the satellite; it basically consists of all the satellite's components wired together on a table for testing. Of course, going to the clean room each time just to develop software is a lot of hassle. Therefore, remote development was set up and is possible via Eclipse and RemoteGDB.

On-board computer iOBC engineering model in the clean room

The core controller will take care of monitoring the supervisor, which in turn generates voltage and temperature values of the OBC. It will also take care of monitoring all running tasks. Readers unfamiliar with embedded programming and real-time operating systems probably still understand the concept of threads, which are used extensively on desktop systems. Even though the OBC only has one core, it is possible to perform apparent multitasking by using a scheduler, which is the core component of a real-time operating system (RTOS). The most common ones for space applications are, among others, FreeRTOS, RTEMS and Linux. The FSFW offers abstraction layers for all of them, and FreeRTOS was chosen for SOURCE because the driver functions provided by the OBC manufacturer also use FreeRTOS. The controller uses the FreeRTOS API to monitor the stack usage of programs, to generate general CPU statistics and to downlink them (in CSV format).

Task stats when printed out in debug mode

Another important task of the core controller is the scrubbing of non-volatile memories on the on-board computer. Space is a hostile environment, and the strong radiation can cause bit flips in the memories, also called single-event upsets (SEUs). Therefore, a lot of space-grade hardware features advanced error-correcting codes (ECC) to correct these anomalies. The OBC of SOURCE does not feature hardware ECC, but it is possible to implement software ECC, for example by using a Hamming code (https://en.wikipedia.org/wiki/Hamming_code), which is able to correct one bit flip and recognize two bit flips per 256 bytes. The Hamming code will be generated on ground and written (or uploaded) to the non-volatile memory. It will then be used to regularly check the binaries in the non-volatile memories for bit flips. This task, which is called scrubbing, will also be performed by the core controller.

The complexity of the software is quite high. A schematic of the software architecture was created with the graph software yEd to visualize it. This software schematic is the most useful document for presenting the software architecture in a brief format that is also accessible to other subsystems and stakeholders interested in the success of the project. A similar schematic exists to visualize the architecture of the whole system (in terms of hardware).

Software Schematic for the SOURCE On-Board computer