thanasisn/training_location_analysis

Analysis of training and location data.

Ingest data from different sources and formats into a portable "database". Create methods to find duplicate data among records and perform quality checks and corrections. Analyze the training data with respect to fitness, and analyze and aggregate the location data for presence statistics and GIS applications.

This will probably always be a work in progress.

Create a database

  • Include .csv files from smartphone logs.
  • Include .hrv files from Polar.
  • Include .json files from Garmin.
  • Include .tcx files from Polar.
  • Include SQLite databases from Gadgetbridge.
  • Include SQLite databases from the Amazfit Bip.
  • Include SQLite databases from GarminDB.
  • Include data from the Google location service.
  • Include .fit files from Garmin.
  • Include .gpx files from other sources.
  • Include .json files from GoldenCheetah.
  • Database maintenance.
    • Check for duplicated records.
    • Check variable names for similarity.
    • Create new variables automatically.
    • Remove database data originating from deleted files.
    • Remove database data originating from modified files.
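A minimal ingest sketch is shown below. It is only illustrative: the actual project may use different tooling and a different storage format, and the table layout, column names and JSON structure are assumptions made for the example. It funnels two of the formats listed above into a single SQLite file; the .fit, .tcx, .gpx and sqlite sources would need dedicated parsers.

```python
# Illustrative ingest skeleton (assumptions: SQLite as the portable store,
# hypothetical column names and JSON layout; the real project may differ).
import csv
import json
import sqlite3
from pathlib import Path

DB_PATH = "training_location.db"

SCHEMA = """
CREATE TABLE IF NOT EXISTS points (
    source_file TEXT,   -- provenance, used later to purge deleted/modified files
    time        TEXT,   -- ISO 8601 timestamp
    latitude    REAL,
    longitude   REAL,
    heart_rate  REAL
);
"""

def rows_from_csv(path: Path):
    """Smartphone-log style CSV; the column names here are assumptions."""
    with path.open(newline="") as fh:
        for rec in csv.DictReader(fh):
            yield (str(path), rec.get("time"), rec.get("lat"),
                   rec.get("lon"), rec.get("hr"))

def rows_from_json(path: Path):
    """Generic JSON export; the 'samples' structure is an assumption."""
    for rec in json.loads(path.read_text()).get("samples", []):
        yield (str(path), rec.get("time"), rec.get("lat"),
               rec.get("lon"), rec.get("hr"))

READERS = {".csv": rows_from_csv, ".json": rows_from_json}

def ingest(source_dir: str) -> None:
    con = sqlite3.connect(DB_PATH)
    con.executescript(SCHEMA)
    for path in Path(source_dir).rglob("*"):
        reader = READERS.get(path.suffix.lower())
        if reader is None:
            continue  # other formats would need their own parsers
        con.executemany("INSERT INTO points VALUES (?, ?, ?, ?, ?)", reader(path))
    con.commit()
    con.close()

if __name__ == "__main__":
    ingest("./sources")
```

Recording the source file next to every row is what makes the maintenance steps possible: records belonging to deleted or modified files can be selected and removed by their source_file value.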

Quality check of location data

  • Deduplicate points.
  • Remove errors in records.
  • Combine columns/variables.
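A hedged sketch of how such a quality-check pass could look, assuming pandas and the hypothetical column names time, latitude and longitude; the project's actual rules and thresholds may differ:

```python
# Hedged example of a location quality-check pass with pandas;
# column names and validity rules are assumptions, not the project's actual ones.
import pandas as pd

def clean_points(df: pd.DataFrame) -> pd.DataFrame:
    """Deduplicate location points and drop obviously invalid records."""
    df = df.sort_values("time")
    # Drop exact duplicates produced by overlapping exports of the same activity.
    df = df.drop_duplicates(subset=["time", "latitude", "longitude"])
    # Remove records with impossible coordinates.
    df = df[df["latitude"].between(-90, 90) & df["longitude"].between(-180, 180)]
    # Drop (0, 0) points that some devices emit when the GPS fix is lost.
    df = df[~((df["latitude"] == 0) & (df["longitude"] == 0))]
    return df.reset_index(drop=True)
```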

Merge analysis from my other projects

Description

The main database collects all available data from the source files. The intent is to first aggregate as much data as possible and then analyze the raw data, in order to find source files that we can delete or exclude from the main database. Also, by reading all the files we can detect file and formatting problems. The source files have been produced by different devices and processed by different software. We want to collect all the information gathered over a period of more than 10 years, so we expect more than 100 variables/columns and more than 20 million records/rows. The processing scheme we are trying to implement should work on modest hardware (8 GB of RAM or even less).
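One common way to stay within such a memory budget is to process one source file at a time, in fixed-size chunks, and append each result to the on-disk database instead of holding all rows in RAM at once. The snippet below sketches that idea; pandas, SQLite and the points table are assumptions, not necessarily the project's actual scheme:

```python
# Sketch of memory-bounded ingestion: read each file in chunks and append to the
# on-disk database. pandas/SQLite and the 'points' table are assumptions.
import sqlite3
import pandas as pd

def append_csv_in_chunks(csv_path: str, db_path: str, chunk_rows: int = 100_000) -> None:
    con = sqlite3.connect(db_path)
    for chunk in pd.read_csv(csv_path, chunksize=chunk_rows):
        chunk.to_sql("points", con, if_exists="append", index=False)
    con.close()
```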

With further analysis we can merge some of the variables and check the data quality, as sketched below.
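As a hypothetical example of such a merge, different sources may report the same quantity (say, heart rate) under different column names, and these near-duplicate variables can be coalesced into one:

```python
# Hypothetical example: different sources report the same quantity under
# different names, so near-duplicate columns are merged into one.
import pandas as pd

def coalesce(df: pd.DataFrame, target: str, candidates: list[str]) -> pd.DataFrame:
    """Keep the first non-null value per row among `candidates` as `target`."""
    existing = [c for c in candidates if c in df.columns]
    if not existing:
        return df
    df[target] = df[existing].bfill(axis=1).iloc[:, 0]
    return df.drop(columns=[c for c in existing if c != target])

# e.g. unify heart-rate columns coming from different exports (names are made up):
# df = coalesce(df, "heart_rate", ["heart_rate", "hr", "HeartRateBpm"])
```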

Once we are confident about the quality of the data and the information it contains, we can use it to create the other datasets we need.

Helpful and similar projects

My database stats

 fit   gpx  json
1180  1263  2277

Table: File types

 fit   gpx    gz  json   zip
  81  1259   456  2277   647

Table: File extensions

Total rows: 34548721

Total files: 4720

Total days: 2845

Total vars: 159

DB Size: 1.1 GiB

Source Size: 3.5 GiB