Organize your photos & videos, chats & messages, location history, social media content, contacts, and more into a single cohesive timeline on your own computer where you can keep them alive forever.
Timelinize lets you import your data from practically anywhere: your computer, phone, online accounts, GPS-enabled radios, various apps and programs, contact lists, cameras, and more.
Join our Discord to discuss!
Note
I am looking for a better name for this project. If you have an idea for a good name that is short, relevant, unique, and available, I'd love to hear it!
These were captured using a dev repository of mine filled with a subset of my real data, so I've run Timelinize in obfuscation mode: images and videos are blurred (except profile pictures, which I still need to fix); names, identifiers, and locations around sensitive areas are randomized; and text has been replaced with random words so that each string is about the same length.
(I hope to make a video tour soon.)
Please remember this is an early alpha preview, and the software is very much evolving and improving. And you can help!
- Obtain your data. This usually involves exporting your data from apps, online accounts, or devices. For example, requesting an archive from Google Takeout. (Apple iCloud, Facebook, Twitter/X, Strava, Instagram, etc. all offer similar features for GDPR compliance.) Do this early/soon, because some services take days to provide your data.
- Import your data using Timelinize. You don't need to extract or decompress .tar or .zip archives; Timelinize will attempt to recognize your data in its original format and folder structure. All the data you import is indexed in a SQLite database and stored on disk organized by date, without obfuscation or anything complicated.
- Explore and organize! Timelinize has a UI that portrays data using various projections and filters. It can recall moments from your past and help you view your life more comprehensively. (It's a great living family history tool.)
- Repeat steps 1-3 as often as desired. Timelinize will skip any existing data that is the same and only import new content. You could do this every few weeks or months for busy accounts that are most important to you.
Caution
Timelinize is in active development and is still considered unstable. The schema is still changing, necessitating starting over from a clean slate when updating. Always keep your original source data. Expect to delete and recreate your timelines as you upgrade during this alpha development period.
Important
Please ensure you have the necessary dependencies installed or Timelinize will not function properly.
After you have the system requirements installed, you can download and run Timelinize from the latest Release action. Click the most recent job and then choose the artifact at the bottom of the page that matches your platform.
Because of limitations in GitHub Actions, all artifacts get downloaded as .zip files even though the artifact is already compressed, so you may have to double-extract the download.
While Timelinize is in development, it's a good idea to start over with a new timeline repository every time you upgrade your build. The schema is still changing!
I recommend running from the command line even if you can double-click to run, so that you can see the log/error output. Logs are also available in your browser dev tools console.
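For example, on Linux or macOS you might start it from a terminal like this (this assumes the extracted binary is named timelinize; on Windows it would be timelinize.exe):
$ ./timelinize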
docker run -p12002:12002 \
-v /path/to/repo:/repo \
-v /path/to/config:/app/.config/timelinize \
ghcr.io/timelinize/timelinize
That will run Timelinize on port 12002, with the data repository mounted at /path/to/repo (change to suit your needs) and the configuration directory mounted at /path/to/config (likewise).
When using Docker bind mounts like the ones above, make sure the directories exist on your host machine and that they are owned by user ID 1000.
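If those directories don't exist yet, a minimal host-side setup could look like this (same placeholder paths as above):

# create the host directories used by the bind mounts above
mkdir -p /path/to/repo /path/to/config
# the container runs as user ID 1000, so give that user ownership
sudo chown -R 1000:1000 /path/to/repo /path/to/config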
Note
Because Timelinize is running inside a Docker container, it won't have access to your host's filesystem. You will need to mount the directories you want to access as volumes, to be able to load data into Timelinize.
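For example, to make a folder of exported archives visible inside the container, add another volume; the /data mount point here is just an illustrative choice:

docker run -p12002:12002 \
    -v /path/to/repo:/repo \
    -v /path/to/config:/app/.config/timelinize \
    -v /path/to/exports:/data:ro \
    ghcr.io/timelinize/timelinize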
Timelinize compiles for Windows, Mac, and Linux.
The Makefile installs all dependencies and cross compiles to a single binary in the .bin folder.
make all
make run
Although Timelinize is written in Go, advanced media-related features such as video transcoding and thumbnail generation (and in the future, indexing with on-device machine learning) are best done with external dependencies. When building from source, you need to make sure the development packages/versions of those dependencies are installed! Also, the latest version of Go is required.
Note that, on some platforms, the compilation dependencies may be different from the dependencies needed to run an already-built binary (for example, on Ubuntu you need libvips-dev to compile, but on end-user machines you just need libvips).
- Go (latest version; do not use Debian or Ubuntu package managers)
- ffmpeg (executable must be in PATH)
- libvips-dev
  - Ubuntu: sudo apt install -y libvips-dev
  - Arch: sudo pacman -S libvips
  - macOS: brew install libvips
- libheif (I think libheif is sometimes automatically installed when you install libvips)
  - Ubuntu: sudo add-apt-repository ppa:vpa1977/libheif && sudo apt update && sudo apt install libheif-dev libheif1
  - Arch: sudo pacman -S libheif (if not already installed)
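On Linux or macOS, a quick way to confirm these libraries are visible before building is to query pkg-config (this sketch assumes pkg-config is installed; the module names are vips and libheif):

ffmpeg -version | head -n 1
pkg-config --modversion vips
pkg-config --modversion libheif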
This is the easiest way I have found to get the project compiling on Windows, but let me know if there's a better way.
- Make sure you don't already have MSYS2 installed and C:\msys64 does not exist.
- Install MSYS2: https://www.msys2.org/ - don't launch it after installing, since that likely brings up the wrong shell (UCRT; we want MINGW64. Yes, UCRT is recommended as it's more modern, but I'm not confident that our dependencies are all available as UCRT packages yet).
- Run the MSYS2 MINGW64 application (this is MSYS2's MINGW64 environment).
- Install the mingw64 toolchain and relevant tools, along with libvips and libheif:
pacman -S --needed base-devel mingw-w64-x86_64-toolchain mingw-w64-x86_64-libvips mingw-w64-x86_64-libheif
- Go to the Windows environment variables settings for your account, and make sure:
  - Path has C:\msys64\mingw64\bin
  - PKG_CONFIG_PATH has C:\msys64\mingw64\lib\pkgconfig
- Restart any running programs/terminals/shells, then run gcc --version to prove that gcc works. The vips and heif-* commands should also work; it is likely that the libraries are installed properly then too.
- Running go build should then succeed, assuming the environment variables above are set properly. You might need to set CGO_ENABLED=1 (in PowerShell: $env:CGO_ENABLED = 1).
NOTE: Setting the CC env var to the path of MSYS's MINGW64 gcc isn't sufficient if a different gcc is in the PATH. You will need to prepend the correct gcc folder to the PATH!
For compilation targeting the same platform (OS and architecture) as your dev machine, go build should suffice.
Once you have the necessary dependencies installed, you can simply run go build from the project folder:
$ go build
and a binary will be placed in the current directory.
Or, to start the server and open a web browser directly:
$ go run main.go
To only start the server and not open a web browser:
$ go run main.go serve
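With serve, no browser window opens, so point one at the web UI yourself (this assumes the server listens on the same port 12002 used in the Docker example above; adjust if yours differs):

$ xdg-open http://localhost:12002    # Linux; use open on macOS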
The use of cgo makes cross-compilation a little tricky, but doable, thanks to zig.
Mac is the only platform I know of that can cross-compile to all the other major platforms.
Make sure zig is installed. This makes cross-compiling C/C++ a breeze.
To strip symbol tables and other debugging info, add -ldflags "-s -w" to these go build commands for a smaller binary. (This is not desirable for production builds.)
CGO_ENABLED=1 GOOS=linux GOARCH=amd64 CC="zig cc -target x86_64-linux" CXX="zig c++ -target x86_64-linux" go build
CGO_ENABLED=1 GOOS=linux GOARCH=arm64 CC="zig cc -target aarch64-linux" CXX="zig c++ -target aarch64-linux" go build
CGO_ENABLED=1 GOOS=windows GOARCH=amd64 CC="zig cc -target x86_64-windows" CXX="zig c++ -target x86_64-windows" go build
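For instance, the Linux/amd64 command above with the stripping flags added, and with -o to give the output a distinct name (the binary name here is just an example):

CGO_ENABLED=1 GOOS=linux GOARCH=amd64 CC="zig cc -target x86_64-linux" CXX="zig c++ -target x86_64-linux" go build -ldflags "-s -w" -o timelinize-linux-amd64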
Timelinize has a symmetric HTTP API and CLI: when an HTTP API endpoint is created, it is automatically exposed on the command line as well.
Run timelinize help (or go run main.go help if you're running from source) to view the list of commands, which are also HTTP endpoints. JSON or form inputs are converted to command line args/flags that represent the JSON schema or form fields.
The motivation for this project is twofold. Both reasons press upon me with a sense of urgency, which is why I dedicated some nights and weekends to work on this.
- Connecting with my family -- both living and deceased -- is important to me and my close relatives. But I wish we had more insights into the lives of those who came before us. What was important to them? Where did they live / travel / spend their time? What lessons did they learn? How did global and local events -- or heck, even the weather -- affect them? What hardships did they endure? What would they have wanted to remember? What would it be like to talk to them? A lot of this could not be known unless they wrote it all down. But these days, we have that data for ourselves. What better time than right now to start collecting personal histories from all available sources and develop a rich timeline of our lives for our families, or even just for our own reference and nostalgia?
- Our lives are better-documented than any before us, but the record of our life is more ephemeral than any before us, too. We lose control of our data by relying on centralized, proprietary apps and cloud services which are useful today, and gone tomorrow. I wrote Timelinize because now is the time to liberate my data from corporations who don't own it, yet who have the only copy of it. This reality has made me feel uneasy for years, and it's not going away soon. Timelinize makes it bearable.
Imagine being able to pull up a single screen with your data from any and all of your online accounts and services -- while offline. And there you see so many aspects of your life at a glance: your photos and videos, social media posts, locations on a map and how you got there, emails and letters, documents, health and physical activities, mental and emotional wellness, and maybe even music you listened to, for any given day. You can "zoom out" and get the big picture. Machine learning algorithms could suggest major clusters based on your content to summarize your days, months, or years, and from that, even recommend printing physical memorabilia. It's like a highly-detailed, automated journal, fully in your control, which you can add to in the app: augment it with your own thoughts like a regular journal.
Then cross-reference your own timeline with a global public timeline: see how locations you went to changed over time, or what major news events may have affected you, or what the political/social climate -- or the literal climate -- was like at the time. For example, you may wonder, "Why did the family stay inside so much of the summer one year?" You could then see, "Oh, because it was 110°F (43°C) for two months straight."
Or translate the projection sideways, and instead of looking at time cross-sections, look at cross-sections of your timeline by media type: photos, posts, location, sentiment. Look at plots, charts, and graphs of your physical activity.
Or view projections by space instead of time: view interrelations between items on a map, even items that don't have location data, because the database is entity-aware. So if a person receives a text message and the same person has location information at about the same time from a photo or GPS device, then the text message can appear on a map too, reminding you where you first got the text with the news about your nephew's birth.
And all of this runs on your own computer: no one else has access to it, no one else owns it, but you.
And if everyone had their own timeline, in theory they could be merged into a global supertimeline to become a thorough record of the human race, all without the need for centralizing our data on cloud services that are controlled by greedy corporations.
I've been working on this project since about 2013, even before I conceptualized Caddy. My initial vision was to create an automated backup of my Picasa albums that I could store on my own hard drive. This project was called Photobak. Picasa eventually became Google Photos, and about the same time I realized I wanted to backup my photos posted to Facebook, Instagram, and Twitter, too. And while I was at it, why not include my Google Location History to augment the location data from the photos. The vision continued to expand as I realized that my family could use this too, so the schema was upgraded to support multiple people/entities as well. This could allow us to merge databases, or timelines, as family members pass, or as they share parts of their timeline around with each other. Timelinize is the mature evolution of the original project that is now designed to be a comprehensive, highly detailed archive of one's life through digital (or digitized) content. An authoritative, unified record that is easy to preserve and organize.
This project is licensed under the AGPL. I chose this license because I do not want others to make proprietary or commercial software using this package. The point of this project is the liberation of, and control over, one's own personal data, and I want to ensure that this project won't be used in anything that would perpetuate the walled-garden dilemma we already face today. Even if this project ever includes proprietary source code in the future, I can ensure it will stay aligned with my values and the project's original goals.