Algorithm exercises solved in TypeScript, running with the Jest testing suite. Developed with TDD.
Go to Install and run
This repository is part of a series that share and solve the same objectives; each one is based on a different software ecosystem, depending on the chosen programming language:
- For academic purposes, it is a backup of some algorithm exercises (with their solutions), proposed by various sources: Leetcode, HackerRank, Project Euler, ...
- The solutions must be written in "vanilla" code, that is, avoiding as much as possible the use of external libraries (at runtime).
- Adoption of methodology and good practices. Each exercise is implemented as a unit test set, using TDD (Test-Driven Development) and Clean Code ideas.
Foundation of a project that supports:
- Explicit typing when the language supports it, even when it is not mandatory.
- Static Code Analysis (Lint) of code, scripts and documentation.
- Uniform Code Styling.
- Unit Test framework.
- Coverage collection, with a high coverage percentage (equal or close to 100%).
- Pipeline (GitHub Actions). Each command must take care of its return status code.
- Docker-based workflow to replicate behavior in any environment.
- Other tools to support the reinforcement of software development good practices.
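As an illustration of the coverage and typing points above, a Jest configuration enforcing near-100% coverage could look like the following sketch (assumed values; the repository's actual jest.config may differ):

```typescript
// jest.config.ts — illustrative sketch, not necessarily this repository's
// actual configuration.
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',       // compile TypeScript test files on the fly
  testEnvironment: 'node',
  collectCoverage: true,
  coverageThreshold: {
    // fail the test run when coverage drops below these percentages
    global: { branches: 100, functions: 100, lines: 100, statements: 100 },
  },
};

export default config;
```

With `coverageThreshold` set, `npm run test` exits with a non-zero status when coverage falls below the configured values, which is what lets the pipeline enforce the coverage goal.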
You can run tests in the following ways:
- Install and run directly: requires the runtime tools installed on your OS.
- Install and run with make: requires the runtime tools and "make" installed on your OS.
- Install and run in Docker: requires Docker and docker-compose installed.
- (⭐️) Install and run in Docker with make: requires docker-compose and make installed.
⭐️: Preferred way.
Using a Node.js runtime on your OS, you must install the dependencies first:
npm install
Every problem is implemented as a function with a unit test. The unit test contains the test cases and the input data to solve the problem.
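As a minimal sketch of that layout (the problem and function name are hypothetical, not an actual exercise from this repository), a solution is a plain exported function, and its Jest spec asserts on known inputs:

```typescript
// Hypothetical exercise: "two sum". Given an array of numbers and a target,
// return the indices of the two values that add up to the target, or null.
export function twoSum(
  values: number[],
  target: number
): [number, number] | null {
  const seen = new Map<number, number>(); // value -> index already visited
  for (let i = 0; i < values.length; i++) {
    const complement = target - values[i];
    const j = seen.get(complement);
    if (j !== undefined) {
      return [j, i];
    }
    seen.set(values[i], i);
  }
  return null;
}

// The matching Jest spec, picked up by `npm run test`, would look like:
//   describe('twoSum', () => {
//     it('finds the pair of indices', () => {
//       expect(twoSum([2, 7, 11, 15], 9)).toEqual([0, 1]);
//     });
//     it('returns null when no pair exists', () => {
//       expect(twoSum([1, 2], 7)).toBeNull();
//     });
//   });
```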
Run all tests:
npm run test
You can change the test-running behaviour using the following environment variables:

Variable | Values | Default |
---|---|---|
LOG_LEVEL | debug, warning, error, info | info |
BRUTEFORCE | true, false | false |

- LOG_LEVEL: changes the verbosity level of the outputs.
- BRUTEFORCE: enables or disables running large tests (long running time, large amounts of data, high memory consumption).
Run tests with debug outputs:
LOG_LEVEL=debug npm run test
Run brute-force tests with debug outputs:
BRUTEFORCE=true LOG_LEVEL=debug npm run test
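A sketch of how a test file might honor these variables (the helper names are illustrative assumptions, not the repository's actual code):

```typescript
// Illustrative helpers (assumed names, not this repository's actual code)
// for reading the documented environment variables.

type Env = Record<string, string | undefined>;

// BRUTEFORCE defaults to false; only the exact string "true" enables it.
export function isBruteforceEnabled(env: Env = process.env): boolean {
  return env.BRUTEFORCE === 'true';
}

// LOG_LEVEL defaults to "info" and falls back to it on unknown values.
export function logLevel(env: Env = process.env): string {
  const level = env.LOG_LEVEL ?? 'info';
  return ['debug', 'warning', 'error', 'info'].includes(level)
    ? level
    : 'info';
}
```

In a Jest spec, `isBruteforceEnabled()` can select between `describe` and `describe.skip`, so the large test suites only run when `BRUTEFORCE=true` is set.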
The "make" tool is used to standardize the commands for the same tasks across each sibling repository.
Run tests (libraries are installed as dependency task in make):
make test
Run tests with debug outputs:
make test -e LOG_LEVEL=debug
Run brute-force tests with debug outputs:
make test -e BRUTEFORCE=true -e LOG_LEVEL=debug
Alternative way, use environment variables as prefix:
BRUTEFORCE=true LOG_LEVEL=debug make test
Build an image of the test stage, then create an ephemeral container and run the tests. The BRUTEFORCE and LOG_LEVEL environment variables are passed through from the current environment by docker-compose.
docker-compose --profile testing run --rm algorithm-exercises-ts-test
To change behavior using environment variables, you can pass to containers in the following ways:
From host using docker-compose (compose.yaml) mechanism:
BRUTEFORCE=true LOG_LEVEL=debug docker-compose --profile testing run --rm algorithm-exercises-ts-test
Overriding the Docker CMD, passing the variables to make as "-e" parameters:
docker-compose --profile testing run --rm algorithm-exercises-ts-test make test -e LOG_LEVEL=debug -e BRUTEFORCE=true
make compose/build
make compose/test
To pass environment variables you can use the docker-compose mechanism, or override the CMD and pass them to make as "-e" arguments.
Passing environment variables using docker-compose (compose.yaml mechanism):
BRUTEFORCE=true LOG_LEVEL=debug make compose/test
Run a container with the development target, designed for a development workflow on top of this image. The whole application source is mounted as a volume in the /app directory. Dependencies must be installed for it to run, so install them before running (and after any dependency add/change).
# Build development target image
docker-compose build --compress algorithm-exercises-ts-dev
# run ephemeral container to install dependencies using docker runtime
# and store them in host directory (by bind-mount volume)
docker-compose run --rm algorithm-exercises-ts-dev npm install --verbose
# Run ephemeral container and override command to run test
docker-compose run --rm algorithm-exercises-ts-dev npm run test
The following command simulates a standardized pipeline across environments, using docker-compose and make.
make compose/build && make compose/lint && make compose/test && make compose/run
- Build all Docker stages and tag the relevant images.
- Run static analysis (lint) checks.
- Run unit tests.
- Run a "final" production-ready image as a final container. The final "production" image just shows a minimal "production ready" build (with no tests).
Developed with runtime:
node --version
v22.2.0
- Leetcode: an online platform for coding interview preparation.
- HackerRank: competitive programming challenges for both consumers and businesses.
- Project Euler: a series of computational problems intended to be solved with computer programs.
Use these answers to learn some tips and tricks for algorithm tests.