
samples2

1-projects/Python-OOP-Toy-master/


Note for Windows users: WSL won't work for this module!

Overview

"Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated (objects have a notion of "this" or "self"). In OOP, computer programs are designed by making them out of objects that interact with one another.[1][2]" --Wikipedia

In English, this means that in OOP, code is organized in logical and self-contained parts that contain within them everything needed to create, store, and manipulate one very specific element of the program. When this element is needed, a copy of it is initialized according to the instructions within. This is called an object.

As with all things programming, the specific vocabulary varies from language to language, or even programmer to programmer. Some Python vocabulary:

Class: The top-level organization structure in OOP. This contains all of the instructions and storage for the operations of this part of the program. A class should be self-contained, and all variables within the class should only be modified by methods within the class.

Method: A function that belongs to a specific class.

Constructor: A special method, defined with __init__(), that is used to instantiate an object of this class.

Inheritance: Perhaps the most important concept in OOP, a class may inherit from another class. This gives the child class all of the variables and methods found in the parent class, or classes, automatically.

Override: If a child class needs to function slightly differently than objects of the parent class, this can be done by giving the child class a method with the same name as one found in the parent. This method will override the one defined in the parent class. Often, this is done to add child-specific functionality to the method before calling the parent version of the method using super().foo(). This is commonly done with the __init__() method.

Self: In Python, an object's methods refer to the object's own variables and methods through their first parameter, conventionally named self. These have scope across the entire class. Variables may also be declared normally inside a method and will have scope limited to the block of code they are declared within.
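
To make these terms concrete, here is a tiny sketch (the Animal and Dog classes are invented for illustration; they are not part of the project):

class Animal:
    def __init__(self, name):
        # constructor: runs when Animal("Rex") is instantiated
        self.name = name                  # instance variable, reached through self

    def speak(self):
        return f"{self.name} makes a sound"

class Dog(Animal):                        # Dog inherits from Animal
    def __init__(self, name, breed):
        super().__init__(name)            # call the parent constructor first
        self.breed = breed                # then add child-specific data

    def speak(self):                      # override the parent's method
        return super().speak() + " (a bark, specifically)"

rex = Dog("Rex", "terrier")               # instantiate an object of class Dog
print(rex.speak())                        # -> Rex makes a sound (a bark, specifically)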

Python OOP Toy

This project will demonstrate the core concepts of OOP by using a library called pygame to create a toy similar to early screensavers.

For initial setup, run:

pipenv install
pipenv shell

Then to run, use: python src/draw.py

In-class Demo

Your instructor will demonstrate the above concepts by extending the Block class

Project Work

Fill out the stubs in ball.py to extend the functionality of the ball class.

Stretch Goals

Implement simple physics to enable balls to bounce off of one another, or off of blocks. This will be HARD. If you get it 'sort of working' in any form, consider yourself to have accomplished an impressive feat!

Troubleshooting

Windows

  • If pipenv install is taking forever or erroring with TIMEOUT messages, disable your antivirus software.
  • If pipenv install is puking on installing pygame:
    • Don't use pipenv for this project. No install, no shell.
    • Download the appropriate .whl file from here.
      • Python 3.6 use the cp36 version. Python 3.7 use cp37, etc. Use python --version to check your version.
      • Try the win32 version first. If that doesn't work, try the amd64 version.
      • E.g. pygame‑1.9.3‑cp36‑cp36m‑win32.whl
    • Install it with
      pip install pygame-[whatever].whl
      
      You'll likely need to specify the full path.
    • Once it's installed, run the game from the src/ directory with
      python draw.py
      

Mac

  • If you're getting errors about InvalidMarker:
    • Don't use pipenv for this project. No install, no shell.
    • Run pip3 install pygame
    • Once it's installed, run the game from the src/ directory with
      python3 draw.py
      
1-projects/React-Todo-Solution-master/


All Answers to Partner Study should be filled out in this file.

  1. Single Page Application

  2. Compilers

  3. Bundlers

  4. Elements

  5. Components

  6. JSX

  7. Package Managers

  8. CDN

  9. Props and State


React-Todo

Other Useful Resources

Battle Plan

  • Objective: At this point you have become familiar with the DOM and have built out User Interfaces with HTML, CSS, and some custom components. Now we're going to dive into modern front-end JavaScript development by learning about ReactJS.

  • You're going to be building a ToDo App (please hold your applause).

  • We know this may seem trivial, but the best part about this assignment is that it shows off some of the strengths of React, and you can take it as far as you want, so don't hold back on being creative.

  • Tool requirements

    • React Dev Tools - This is a MUST; you need to install this ASAP!
    • We have everything you need for your React Developer environment in this file. We went over this in the lecture video.

To Get Started

You'll need to make sure you have the following installed.
  • node and npm
  • npm install will pull in all the node_modules you need once you cd into the root directory of the project
  • npm start will start a development server on your localhost at port 3000.
  • npm test will run the tests that are included in the project. Try to get as many of these passing as you can in the allotted time.

How to Tackle this Project

Your job is to write the components that complete the Todo List application, getting as many of the tests to pass as you can. The tests expect a TodoList component that renders a Todo component for each todo item. The requirements for your Todo List app are that it should have an input field that a user can type text into and submit in order to create a new todo item. Aside from being able to add todos, you should be able to mark any todo in the list as 'complete'. In other words, a user should be able to click on any of the todos in the list and have a strikethrough go through that individual todo. This behavior should be toggle-able, i.e. a todo item that has a strikethrough should still be clickable so that completed items can be un-marked as 'completed'. Once you've finished your components, you'll need to have the root App component render your TodoList component.

Tips to Keep in Mind

  • All components you implement should go in the src/components directory.
  • The components should be named App.js, TodoList.js and Todo.js (as those are the files being imported into the tests). A rough sketch of the child components appears after this list.
  • Think of your application as an Application Tree. App is the parent, which controls properties/data needed for the child components. This is how modern applications are built. They're modular, separate pieces of code called components that you 'compose' together to make your app. It's awesome!
  • Be sure to keep your todos in an array on state. Arrays are so awesome to work with.
  • When you need to iterate over a list and return React components out as elements, you'll need to include a "key" property on the element itself. <ElementBeingRendered key={someValue} />. Note: this is what react is doing under the hood, it needs to know how to access each element and they need to be unique so the React engine can do its thing. An example snippet that showcases this may look something like this:
this.state.todos.map((todo, i) => <AnotherComponent key={i} todo={todo} />);

Here, we're simply passing the index of each todo item as the key for the individual React component.

  • Feel free to structure your "todo" data however you'd like. i.e. strings, objects etc.
  • React will give you warnings in the console that urge you to squash React Anti-Patterns. But if something is completely off, you'll get stack trace errors that will force your bundle to freeze up. You can look for these in the Chrome console.
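
Putting the requirements and tips together, here is one rough sketch of the two child components. The prop names (todos, toggleTodo, onToggle) are assumptions, and this sketch treats each todo as an object with text and completed fields (a stretch goal below; plain strings work too with small changes) -- check what the provided tests actually expect. App would keep the todos array on state, render TodoList, and pass a toggle handler down.

// src/components/Todo.js
import React from 'react';

const Todo = ({ todo, onToggle }) => (
  <li
    onClick={onToggle}
    style={{ textDecoration: todo.completed ? 'line-through' : 'none' }}
  >
    {todo.text}
  </li>
);

export default Todo;

// src/components/TodoList.js
import React from 'react';
import Todo from './Todo';

const TodoList = ({ todos, toggleTodo }) => (
  <ul>
    {todos.map((todo, i) => (
      <Todo key={i} todo={todo} onToggle={() => toggleTodo(i)} />
    ))}
  </ul>
);

export default TodoList;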

Stretch Problems

  • Refactor each todo to be an object instead of just a string. For example, todo: {'text': 'Shop for food', 'completed': false}, and when a user clicks on a todo, switch that completed flag to true. If completed === true, this should toggle the strikethrough on the 'completed' todo. The toggling functionality should work the same as when each todo was just a string.
  • Add the ability to delete a todo. The way this would work is each todo item should have an 'x' that should be clickable and that, when clicked, should remove the todo item from the state array, which will also remove it from the rendered list of todos.
  • Take your App's styles to the next level. Start implementing as much creativity here as you'd like. You can build out your styles on a component-by-component basis, e.g. App.js has a file next to it in the directory tree called App.scss and you define all your styles in that file. Be sure to @import these styles into the index.scss file.
  • Persist your data in window.localStorage. Hint: you may have to stringify your data to get it to live inside the browser's localStorage (see the snippet below). This will cause it to persist past the page refresh.
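
For that last stretch goal, a minimal sketch (the 'todos' key name is arbitrary):

// save whenever the todos change
localStorage.setItem('todos', JSON.stringify(this.state.todos));

// load once on startup (e.g. in componentDidMount)
const saved = JSON.parse(localStorage.getItem('todos')) || [];
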
1-projects/Relational-Databases-master/


Relational Databases and PostgreSQL


What is a relational database?

Data stored as row records in tables. Imagine a spreadsheet with column headers describing the contents of each column, and each row is a record.

A database can contain many tables. A table can contain many rows. A row can contain many columns.

Records are related to those in different tables through common columns that are present in both tables.

For example, an Employee table might have the following columns in each record:

Employee
    EmployeeID  FirstName  LastName  DepartmentID

And a Department table might have the following columns in each record:

Department
    DepartmentID  DepartmentName

Notice that both Employee and Department have a DepartmentID column. This common column relates the two tables and can be used to join them together with a query.

The structure described by the table definitions is known as the schema.

Compare to NoSQL databases that work with key/value pairs or are document stores.

Relational vs NoSQL

NoSQL is a term that refers to non-relational databases, most usually document store databases. (Though it can apply to almost any kind of non-relational database.)

MongoDB is a great example of a NoSQL database.

When Do You Use NoSQL Versus a Relational Database?

Unfortunately, there are no definitive rules on when to choose one or the other.

Do you need ACID-compliance? Consider a relational database.

Does your schema (structure of data) change frequently? Consider NoSQL.

Does absolute consistency in your data matter, e.g. a bank, inventory management system, employee management, academic records, etc.? Consider a relational database.

Do you need easy-to-deploy high-availability? Consider NoSQL.

Do you need transactions to happen atomically? (The ability to update multiple records simultaneously?) Consider a relational database.

Do you need read-only access to piles of data? Consider NoSQL.

PostgreSQL

PostgreSQL is a venerable relational database that is freely available and world-class.

https://www.postgresql.org/

SQL, Structured Query Language

SQL ("sequel") is the language that people use for interfacing with relational databases.

Create a table with CREATE TABLE

A database is made up of a number of tables. Let's create a table using SQL in the shell. Be sure to end the command with a semicolon ;.

(Note: SQL commands are often capitalized by convention, but can be lowercase.)

$ psql
psql (10.1)
Type "help" for help.

dbname=> CREATE TABLE Employee (ID INT, LastName VARCHAR(20));

Use the \dt command to show which tables exist:

dbname=> CREATE TABLE Employee (ID INT, LastName VARCHAR(20));
CREATE TABLE
dbname=> \dt
        List of relations
Schema |   Name   | Type  | Owner 
--------+----------+-------+-------
public | employee | table | beej
(1 row)

Use the \d command to see what columns a table has:

dbname=> \d Employee
                        Table "public.employee"
    Column    |         Type          | Collation | Nullable | Default 
--------------+-----------------------+-----------+----------+---------
 id           | integer               |           |          | 
 lastname     | character varying(20) |           |          | 

Create a row with INSERT

dbname=> INSERT INTO Employee (ID, LastName) VALUES (10, 'Tanngnjostr');
INSERT 0 1

You can omit the column names if you're putting data in every column:

dbname=> INSERT INTO Employee VALUES (10, 'Tanngnjostr');
INSERT 0 1

Run some more inserts into the table:

INSERT INTO Employee VALUES (11, 'Alice');
INSERT INTO Employee VALUES (12, 'Bob');
INSERT INTO Employee VALUES (13, 'Charlie');
INSERT INTO Employee VALUES (14, 'Dave');
INSERT INTO Employee VALUES (15, 'Eve');

Read rows with SELECT

You can query the table with SELECT.

Query all the rows and columns:

dbname=> SELECT * FROM Employee;
 id |  lastname   
----+-------------
 10 | Tanngnjostr
 11 | Alice
 12 | Bob
 13 | Charlie
 14 | Dave
 15 | Eve
(6 rows)

With SELECT, * means "all columns".

You can choose specific columns:

dbname=> SELECT LastName FROM Employee;
  lastname   
-------------
 Tanngnjostr
 Alice
 Bob
 Charlie
 Dave
 Eve
(6 rows)

And you can search for specific rows with the WHERE clause:

dbname=> SELECT * FROM Employee WHERE ID=12;
 id | lastname 
----+----------
 12 | Bob
(1 row)

dbname=> SELECT * FROM Employee WHERE ID=14 OR LastName='Bob';
 id | lastname 
----+----------
 12 | Bob
 14 | Dave
(2 rows)

Finally, you can rename the output columns, if you wish:

SELECT ID AS "Employee ID", LastName AS Name
    FROM Employee
    WHERE ID=14 OR LastName='Bob';
    
 Employee ID | Name 
-------------+----------
     12      | Bob
     14      | Dave

Update rows with UPDATE

The UPDATE command can update one or many rows. Restrict which rows are updated with a WHERE clause.

dbname=> UPDATE Employee SET LastName='Harvey' WHERE ID=10;
UPDATE 1

dbname=> SELECT * FROM Employee WHERE ID=10;
 id | lastname 
----+----------
 10 | Harvey
(1 row)

You can update multiple columns at once:

dbname=> UPDATE Employee SET LastName='Octothorpe', ID=99 WHERE ID=14;
UPDATE 1

Delete rows with DELETE

Delete from a table with the DELETE command. Use a WHERE clause to restrict the delete.

CAUTION! If you don't use a WHERE clause, all rows will be deleted from the table!

Delete some rows:

dbname=> DELETE FROM Employee WHERE ID >= 15;
DELETE 2

Delete ALL rows (Danger, Will Robinson!):

dbname=> DELETE FROM Employee;
DELETE 4

Deleting entire tables with DROP

If you want to get rid of an entire table, use DROP.

WARNING! There is no going back. Table will be completely blown away. Destroyed ...by the Empire.

dbname=> DROP TABLE Employee;
DROP TABLE

The WHERE Clause

You've already seen some examples of how WHERE affects SELECT, UPDATE, and DELETE.

Normal operators like <, >, =, <=, >= are available.

For example:

SELECT * from animals
    WHERE age >= 10;

AND, OR, and Parentheses

You can add more boolean logic with AND, OR, and affect precedence with parentheses:

SELECT * from animals
    WHERE age >= 10 AND type = 'goat';
SELECT * from animals
    WHERE age >= 10 AND (type = 'goat' OR type = 'antelope');

LIKE

The LIKE operator can be used to do pattern matching.

_   -- Match any single character
%   -- Match any sequence of characters

To select all animals that start with ab:

SELECT * from animal
    WHERE name LIKE 'ab%';

Column Data Types

You probably noticed a few data types we specified with CREATE TABLE, above. PostgreSQL has a lot of data types.

This is an incomplete list of some of the more common types:

VARCHAR(n)   -- Variable character string of max length n
BOOLEAN      -- TRUE or FALSE
INTEGER      -- Integer value
INT          -- Same as INTEGER
DECIMAL(p,s) -- Decimal number with p digits of precision
             -- and s digits right of the decimal point
REAL         -- Floating point number
DATE         -- Holds a date
TIME         -- Holds a time
TIMESTAMP    -- Holds an instant of time (date and time)
BLOB         -- Binary object

ACID and CRUD

These are two common database terms.

ACID

Short for Atomicity, Consistency, Isolation, Durability. When people mention "ACID-compliance", they're generally talking about the ability of the database to accurately record transactions in the case of crash or power failure.

Atomicity: all transactions will be "all or nothing".

Consistency: all transactions will leave the database in a consistent state with all its defined rules and constraints.

Isolation: the results of concurrent transactions are the same as if those transactions had been executed sequentially.

Durability: Once a transaction is committed, it will remain committed, despite crashes, power outages, snow, and sleet.

CRUD

Short for Create, Read, Update, Delete. Describes the four basic functions of a data store.

In a relational database, these functions are handled by INSERT, SELECT, UPDATE, and DELETE.

NULL and NOT NULL

Columns in records can sometimes have no data, referred to by the special keyword NULL. Sometimes it makes sense to have NULL columns, and sometimes it doesn't.

If you explicitly want to disallow NULL columns in your table, you can create the columns with the NOT NULL constraint:

CREATE TABLE Employee (
    ID INT NOT NULL,
    LastName VARCHAR(20));

COUNT

You can select a count of items in question with the COUNT operator.

For example, count the rows filtered by the WHERE clause:

SELECT COUNT(*) FROM Animals WHERE legcount >= 4;

 count 
-------
     5

Useful with GROUP BY, below.

ORDER BY

ORDER BY sorts SELECT results for you. Use DESC to sort in reverse order.

SELECT * FROM Pets
ORDER BY age DESC;

  name     | age 
-----------+-----
 Rover     |   9
 Zaphod    |   4
 Mittens   |   3

GROUP BY

When used with an aggregating function like COUNT, GROUP BY can be used to produce groups of results.

Count all the customers in certain countries:

SELECT COUNT(CustomerID), Country
    FROM Customers
    GROUP BY Country;

  COUNT(CustomerID)   |  Country 
----------------------+-----------
      1123            |    USA
       734            |    Germany
                     etc.

Keys: Primary, Foreign, and Composite

Primary Key

Rows in a table often have one column that is called the primary key. The value in this column applies to all the rest of the data in the record. For example, an EmployeeID would be a great primary key, assuming the rest of the record held employee information.

Employee
    ID (Primary Key)  LastName  FirstName  DepartmentID

To create a table and specify the primary key, use the NOT NULL and PRIMARY KEY constraints:

CREATE TABLE Employee (
    ID INT NOT NULL PRIMARY KEY,
    LastName VARCHAR(20),
    FirstName VARCHAR(20),
    DepartmentID INT);

You can always search quickly by primary key.

Foreign Keys

If a key refers to a primary key in another table, it is called a foreign key (abbreviated "FK"). You are not allowed to make changes to the database that would cause the foreign key to refer to a non-existent record.

The database uses this to maintain referential integrity.

Create a foreign key using the REFERENCES constraint. It specifies the remote table and column the key refers to.

CREATE TABLE Department (
    ID INT NOT NULL PRIMARY KEY,
    Name VARCHAR(20));

CREATE TABLE Employee (
    ID INT NOT NULL PRIMARY KEY,
    LastName VARCHAR(20),
    FirstName VARCHAR(20),
    DepartmentID INT REFERENCES Department(ID));

In the above example, you cannot add a row to Employee until that DepartmentID already exists in Department's ID.

Also, you cannot delete a row from Department if that row's ID was a DepartmentID in Employee.

Composite Keys

Keys can also consist of more than one column. Composite keys can be created as follows:

CREATE TABLE example (
    a INT,
    b INT,
    c INT,
    PRIMARY KEY (a, c));

Auto-increment Columns

These are columns that the database manages, usually in an ever-increasing sequence. It's perfect for generating unique, numeric IDs for primary keys.

In some databases (e.g. MySQL) this is done with an AUTO_INCREMENT keyword. PostgreSQL is different.

In PostgreSQL, use the SERIAL keyword to auto-generate sequential numeric IDs for records.

CREATE TABLE Company (
    ID SERIAL PRIMARY KEY,
    Name VARCHAR(20));

When you insert, do not specify the ID column. You must, however, give a column name list that includes the remaining column names you are inserting data for. The ID column will be automatically generated by the database.

INSERT INTO Company (Name) VALUES ('My Awesome Company');

Joins

This concept is extremely important to understanding how to use relational databases!

When you have two (or more) tables with data you wish to retrieve from both, you do so by using a join. These come in a number of varieties, some of which are covered here.

When you're using SELECT to make the join between two tables, you can specify which table a specific column comes from by using the . operator. This is especially useful when columns have the same name in the different tables:

SELECT Animal.name, Farm.name
    FROM Animal, Farm
    WHERE Animal.FarmID = Farm.ID;

Tables to use in these examples:

CREATE TABLE Department (
    ID INT NOT NULL PRIMARY KEY,
    Name VARCHAR(20));

CREATE TABLE Employee (
    ID INT NOT NULL PRIMARY KEY,
    Name VARCHAR(20),
    DepartmentID INT);

INSERT INTO Department VALUES (10, 'Marketing');
INSERT INTO Department VALUES (11, 'Sales');
INSERT INTO Department VALUES (12, 'Entertainment');

INSERT INTO Employee VALUES (1, 'Alice', 10);
INSERT INTO Employee VALUES (2, 'Bob', 12);
INSERT INTO Employee VALUES (3, 'Charlie', 99);

NOTE: Importantly, department ID 11 is not referred to from Employee, and department ID 99 (Charlie) does not exist in Department. This is instrumental in the following examples.

Inner Join, The Most Common Join

This is the most commonly-used join, by far, and is what people mean when they just say "join" with no further qualifiers.

This will return only the rows that match the requirements from both tables.

For example, we don't see "Sales" or "Charlie" in the join because neither of them match up to the other table:

dbname=> SELECT Employee.ID, Employee.Name, Department.Name
             FROM Employee, Department
             WHERE Employee.DepartmentID = Department.ID;

 id | name  |     name      
----+-------+---------------
  1 | Alice | Marketing
  2 | Bob   | Entertainment
(2 rows)

Above, we used a WHERE clause to perform the inner join. This is absolutely the most common way to do it.

There is an alternative syntax, below, that is barely ever used.

dbname=> SELECT Employee.ID, Employee.Name, Department.Name
             FROM Employee INNER JOIN Department
             ON Employee.DepartmentID = Department.ID;

 id | name  |     name      
----+-------+---------------
  1 | Alice | Marketing
  2 | Bob   | Entertainment
(2 rows)

Left Outer Join

This join works like an inner join, but also returns all the rows from the "left" table (the one after the FROM clause). It puts NULL in for the missing values in the "right" table (the one after the LEFT JOIN clause.)

Example:

dbname=> SELECT Employee.ID, Employee.Name, Department.Name
             FROM Employee LEFT JOIN Department
             ON Employee.DepartmentID = Department.ID;

 id |  name   |     name      
----+---------+---------------
  1 | Alice   | Marketing
  2 | Bob     | Entertainment
  3 | Charlie | 
(3 rows)

Notice that even though Charlie's department isn't found in Department, his record is still listed with a NULL department name.

Right Outer Join

This join works like an inner join, but also returns all the rows from the "right" table (the one after the RIGHT JOIN clause). It puts NULL in for the missing values in the "left" table (the one after the FROM clause).

dbname=> SELECT Employee.ID, Employee.Name, Department.Name
             FROM Employee RIGHT JOIN Department
             ON Employee.DepartmentID = Department.ID;

 id | name  |     name      
----+-------+---------------
  1 | Alice | Marketing
  2 | Bob   | Entertainment
    |       | Sales
(3 rows)

Notice that even though there are no employees in the Sales department, the Sales name is listed with a NULL employee name.

Full Outer Join

This is a blend of a Left and Right Outer Join. All information from both tables is selected, with NULL filling the gaps where necessary.

 dbname=> SELECT Employee.ID, Employee.Name, Department.Name
              FROM Employee
              FULL JOIN Department
              ON Employee.DepartmentID = Department.ID;
            
 id |  name   |     name      
----+---------+---------------
  1 | Alice   | Marketing
  2 | Bob     | Entertainment
  3 | Charlie | 
    |         | Sales
(4 rows)

Indexes

When searching through tables, you use a WHERE clause to narrow things down. For speed, the columns mentioned in the WHERE clause should either be a primary key, or a column for which an index has been built.

Indexes help speed searches. In a large table, searching over an unindexed column will be slow.

Example of creating an index on the Employee table from the Keys section:

dbname=> CREATE INDEX ON Employee (LastName);
CREATE INDEX

dbname=> \d Employee
                        Table "public.employee"
    Column    |         Type          | Collation | Nullable | Default 
--------------+-----------------------+-----------+----------+---------
 id           | integer               |           | not null | 
 lastname     | character varying(20) |           |          | 
 firstname    | character varying(20) |           |          | 
 departmentid | integer               |           |          | 
Indexes:
    "employee_pkey" PRIMARY KEY, btree (id)
    "employee_lastname_idx" btree (lastname)
Foreign-key constraints:
    "employee_departmentid_fkey" FOREIGN KEY (departmentid) REFERENCES department(id)

Transactions

In PostgreSQL, you can bundle a series of statements into a transaction. The transaction is executed atomically, which means either the entire transaction occurs, or none of the transaction occurs. There will never be a case where a transaction partially occurs.

Create a transaction by starting with a BEGIN statement, followed by all the statements that are to be within the transaction.

START TRANSACTION is generally synonymous with BEGIN in SQL.

To execute the transaction ("Let's do it!"), end with a COMMIT statement.

To abort the transaction and do nothing ("On second thought, nevermind!") end with a ROLLBACK statement. This makes it like nothing within the transaction ever happened.

Usually transactions happen within a program that checks for sanity and either commits or rolls back.

Pseudocode making DB calls that check if a rollback is necessary:

db("BEGIN"); // Begin transaction

db(`UPDATE accounts SET balance = balance - 100.00
    WHERE name = 'Alice'`);

let balance = db("SELECT balance FROM accounts WHERE name = 'Alice'");

// Don't let the balance go below zero:
if (balance < 0) {
    db("ROLLBACK"); // Never mind!! Roll it all back.
} else {
    db("COMMIT"); // Plenty of cash
}

In the above example, the UPDATE and SELECT must happen at the same time (atomically) or else another process could sneak in between and withdraw too much money. Because it needs to be atomic, it's wrapped in a transaction.

If you just enter a single SQL statement that is not inside a BEGIN transaction block, it gets automatically wrapped in a BEGIN/COMMIT block. It is a mini transaction that is COMMITted immediately.

Not all SQL databases support transactions, but most do.
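
For reference, the same flow typed directly into the shell is just a sequence of statements (a sketch using the accounts table from the pseudocode above):

BEGIN;
UPDATE accounts SET balance = balance - 100.00 WHERE name = 'Alice';
-- check that the result looks sane, then either:
COMMIT;    -- keep the change
-- or:
ROLLBACK;  -- throw the change away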

The EXPLAIN Command

The EXPLAIN command will tell you how the database plans to execute a query and how expensive it estimates that plan to be. (EXPLAIN ANALYZE actually runs the query and reports real timings.)

It's a powerful command that can help tell you where you need to add indexes, change structure, or rewrite queries.

dbname=> EXPLAIN SELECT * FROM foo;

                       QUERY PLAN
---------------------------------------------------------
 Seq Scan on foo  (cost=0.00..155.00 rows=10000 width=4)
(1 row)

For more information, see the PostgreSQL EXPLAIN documentation

Quick and Dirty DB Design

Designing a non-trivial database is a difficult, learned skill best left to professionals. Feel free to do small databases with minimal training, but if you get in a professional situation with a large database that needs to be designed, you should consult with people with strong domain knowledge.

That said, here are a couple pointers.

  • In general, all your tables should have a unique PRIMARY KEY for each row. It's common to use SERIAL or AUTO_INCREMENT to make this happen.

  • Keep an eye out for commonly duplicated data. If you are duplicating text data across several records, consider that maybe it should be in its own table and referred to with a foreign key.

  • Watch out for unrelated data in the same record. If it's a record in the Employee table but it has Department_Address as a column, that probably belongs in a Department table, referred to by a foreign key.

But if you really want to design databases, read on to the Normalization and Normal Forms section.

Normalization and Normal Forms

[This topic is very deep and this section cannot do it full justice.]

Normalization is the process of designing or refactoring your tables for maximum consistency and minimum redundancy.

With NoSQL databases, we're used to denormalized data that is stored with speed in mind, and not so much consistency (sometimes NoSQL databases talk about eventual consistency).

Non-normalized tables are considered an anti-pattern in relational databases.

There are many normal forms. We'll talk about First, Second, and Third normal forms.

Anomalies

One of the reasons for normalizing tables is to avoid anomalies.

Insert anomaly: When we cannot insert a row into the table because some of the dependent information is not yet known. For example, we cannot create a new class record in the school database, because the record requires at least one student, and none have enrolled yet.

Update anomaly: When information is duplicated in the database and some rows are updated but not others. For example, say a record contains a city and a zipcode, but then the post office changes the zipcode. If some of the records are updated but not others, some cities will have the old zipcodes.

Delete anomaly: The opposite of an insert anomaly. When we delete some information and other related information must also be deleted against our will. For example, deleting the last student from a course causes the other course information to be also deleted.

By normalizing your tables, you can avoid these anomalies.

First Normal Form (1NF)

When a database is in first normal form, there is a primary key for each row, and there are no repeating sets of columns that should be in their own table.

Unnormalized (column titles on separate lines for clarity):

Farm
    ID
    AnimalName1  AnimalBreed1  AnimalProducesEggs1
    AnimalName2  AnimalBreed2  AnimalProducesEggs2

1NF:

Farm
    ID

Animal
    ID  FarmID[FK Farm(ID)]  Name  Breed  ProducesEggs

Use a join to select all the animals in the farm:

SELECT Name, Farm.ID FROM Animal, Farm WHERE Farm.ID = Animal.FarmID;

Second Normal Form (2NF)

To be in 2NF, a table must already be in 1NF.

Additionally, all non-key data must fully relate to the key data in the table.

In the farm example, above, Animal has a Name and a key FarmID, but these two pieces of information are not related.

We can fix this by adding a table to link the other two tables together:

2NF:

Farm
    ID

FarmAnimal
    FarmID[FK Farm(ID)]  AnimalID[FK Animal(ID)]

Animal
    ID  Name  Breed  ProducesEggs

Use a join to select all the animals in the farms:

SELECT Name, Farm.ID
    FROM Animal, FarmAnimal, Farm
    WHERE Farm.ID = FarmAnimal.FarmID AND
          Animal.ID = FarmAnimal.AnimalID;

Third Normal Form (3NF)

A table in 3NF must already be in 2NF.

Additionally, columns that relate to each other AND to the key need to be moved into their own tables. This is known as removing transitive dependencies.

In the Farm example, the columns Breed and ProducesEggs are related. If you know the breed, you automatically know if it produces eggs or not.

3NF:

Farm
    ID

FarmAnimal
    FarmID[FK Farm(ID)]  AnimalID[FK Animal(ID)]

BreedEggs
    Breed  ProducesEggs

Animal
    ID  Name  Breed[FK BreedEggs(Breed)]

Use a join to select the names of all the animals in the farm that produce eggs:

SELECT Name, Farm.ID
    FROM Animal, FarmAnimal, BreedEggs, Farm
    WHERE Farm.ID = FarmAnimal.FarmID AND
          Animal.ID = FarmAnimal.AnimalID AND
          Animal.Breed = BreedEggs.Breed AND
          BreedEggs.ProducesEggs = TRUE;


Node-Postgres

This is a library that allows you to interface with PostgreSQL through NodeJS.

Its documentation is exceptionally good.
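
A minimal sketch of what using it looks like (the database name and table are assumptions; connection settings can also come from environment variables):

const { Pool } = require('pg');

// pass connection options explicitly, or rely on PGDATABASE etc. in the environment
const pool = new Pool({ database: 'dbname' });

async function main() {
  const res = await pool.query('SELECT * FROM Employee WHERE ID = $1', [12]);
  console.log(res.rows);   // e.g. [ { id: 12, lastname: 'Bob' } ]
  await pool.end();
}

main().catch(console.error);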

Assignments

Security

PostgreSQL Password

You might have noticed that you don't need a password to access your database that you created. This is because PostgreSQL by default uses something called peer authentication mode.

In a nutshell, it makes sure that you are logged in as yourself before you access your database. If a different user tries to access your database, they will be denied.

If you need to set up password access, see client authentication in the PostgreSQL manual

Writing Client Software

When writing code that accesses databases, there are a few rules you should follow to keep things safe.

  • Don't store database passwords or other sensitive information in your code repository. Store dummy credentials instead.

  • When building SQL queries in code, use parameterized queries. You build your query with parameter placeholders for where the query arguments will go (see the sketch after this list).

    This is your number-one line of defense against SQL injection attacks.

    It's a seriously noob move to not use parameterized queries.
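
For example, with node-postgres the difference looks roughly like this (table and variable names are illustrative):

// DON'T: string concatenation invites SQL injection
pool.query("SELECT * FROM Employee WHERE LastName = '" + lastName + "'");

// DO: a parameterized query; the driver handles quoting and escaping
pool.query('SELECT * FROM Employee WHERE LastName = $1', [lastName]);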

Other Relational Databases

There are tons of them by Microsoft, Oracle, etc. etc.

Other popular open source databases in widespread use are:

  • MySQL Multi-user, industrial class.
  • SQLite Single-user, very fast, good for config files.

Assignment: Install PostgreSQL

IMPORTANT! These instructions assume you haven't already installed PostgreSQL. If you have already installed it, skip this section or Google for how to upgrade your installation.

Mac with Homebrew

  1. Open a terminal

  2. Install PostgreSQL: brew install postgresql

    If you get install errors at this point relating to the link phase failing or missing permissions, look back in the output and see what file it failed to write.

    For example, if it's failing to write something in /usr/local/share/man-something, try setting the ownership on those directories to yourself.

    Example (from the command line):

    $ sudo chown -R $(whoami) /usr/local/share/man

    Then try to install again.

  3. Start the database process

    • If you want to start it every time you log in, run:

      brew services start postgresql
      
    • If you want to just start it one time right now, run:

      pg_ctl -D /usr/local/var/postgres start
      
  4. Create a database named the same as your username: createdb $(whoami)

    • Optionally you can call it anything you want, but the shell defaults to looking for a database named the same as your user.

    This database will contain tables.

Then start a shell by running psql and see if it works. You should see this prompt:

$ psql
psql (10.1)
Type "help" for help.

dbname=> 

(Use psql databasename if you created the database under something other than your username.)

Use \l to get a list of databases.

You can enter \q to exit the shell.

Windows

Reports are that one of the easiest installs is with chocolatey. Might want to try that first.

You can also download a Windows installer from the official site.

Another option is to use the Windows Subsystem for Linux and follow the Ubuntu instructions for installing PostgreSQL.

Arch Linux

Arch requires a bit more hands-on, but not much more. Check this out if you want to see a different Unix-y install procedure (or if you run Arch).

Assignment: Create a Table and Use It

Launch the shell on your database, and create a table.

CREATE TABLE Employee (ID INT, FirstName VARCHAR(20), LastName VARCHAR(20));

Insert some records:

INSERT INTO Employee VALUES (1, 'Alpha', 'Alphason');
INSERT INTO Employee VALUES (2, 'Bravo', 'Bravoson');
INSERT INTO Employee VALUES (3, 'Charlie', 'Charleson');
INSERT INTO Employee VALUES (4, 'Delta', 'Deltason');
INSERT INTO Employee VALUES (5, 'Echo', 'Ecoson');

Select all records:

SELECT * FROM Employee;

Select Employee #3's record:

SELECT * FROM Employee WHERE ID=3;

Delete Employee #3's record:

DELETE FROM Employee WHERE ID=3;

Use SELECT to verify the record is deleted.

Update Employee #2's name to be "Foxtrot Foxtrotson":

UPDATE Employee SET FirstName='Foxtrot', LastName='Foxtrotson' WHERE ID=2;

Use SELECT to verify the update.

Assignment: NodeJS Program to Create and Populate a Table

Using Node-Postgres, write a program that creates a table.

Run the following query from your JS code:

CREATE TABLE IF NOT EXISTS Earthquake
    (Name VARCHAR(20), Magnitude REAL)

Populate the table with the following data:

let data = [
    ["Earthquake 1", 2.2],
    ["Earthquake 2", 7.0],
    ["Earthquake 3", 1.8],
    ["Earthquake 4", 5.2],
    ["Earthquake 5", 2.9],
    ["Earthquake 6", 0.6],
    ["Earthquake 7", 6.6]
];

You'll have to run an INSERT statement for each one.
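
One possible sketch using node-postgres (connection details are assumptions, and the course hints may suggest a different structure):

const { Pool } = require('pg');

const pool = new Pool({ database: 'dbname' });

const data = [
  ["Earthquake 1", 2.2], ["Earthquake 2", 7.0], ["Earthquake 3", 1.8],
  ["Earthquake 4", 5.2], ["Earthquake 5", 2.9], ["Earthquake 6", 0.6],
  ["Earthquake 7", 6.6]
];

async function main() {
  await pool.query(
    'CREATE TABLE IF NOT EXISTS Earthquake (Name VARCHAR(20), Magnitude REAL)');

  // one parameterized INSERT per row
  for (const [name, magnitude] of data) {
    await pool.query(
      'INSERT INTO Earthquake (Name, Magnitude) VALUES ($1, $2)',
      [name, magnitude]);
  }

  await pool.end();
}

main().catch(console.error);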

Open a PostgreSQL shell (psql) and verify the table exists:

user-> \dt
          List of relations
 Schema |    Name    | Type  | Owner 
--------+------------+-------+-------
 public | earthquake | table | user
(1 row)

Also verify it is populated:

user-> SELECT * from Earthquake;

     name     | magnitude 
--------------+-----------
 Earthquake 1 |       2.2
 Earthquake 2 |         7
 Earthquake 3 |       1.8
 Earthquake 4 |       5.2
 Earthquake 5 |       2.9
 Earthquake 6 |       0.6
 Earthquake 7 |       6.6
(7 rows)

Hints:

Extra Credit:

  • Add an ID column to help normalize the database. Make this column SERIAL to auto-increment.
  • Add Date, Lat, and Lon columns to record more information about the event.

Assignment: Command-line Earthquake Query Tool

Write a tool that queries the database for earthquakes that are at least a given magnitude.

$ node earthquake 2.9
Earthquakes with magnitudes greater than or equal to 2.9:

Earthquake 2: 7
Earthquake 7: 6.6
Earthquake 4: 5.2
Earthquake 5: 2.9

Use ORDER BY Magnitude DESC to order the results in descending order by magnitude.
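
A rough sketch of earthquake.js (again, connection details are assumptions):

const { Pool } = require('pg');

const pool = new Pool({ database: 'dbname' });
const minMag = parseFloat(process.argv[2]);   // e.g. "node earthquake 2.9"

pool.query(
  'SELECT Name, Magnitude FROM Earthquake WHERE Magnitude >= $1 ORDER BY Magnitude DESC',
  [minMag])
  .then(res => {
    console.log(`Earthquakes with magnitudes greater than or equal to ${minMag}:\n`);
    for (const row of res.rows) {
      console.log(`${row.name}: ${row.magnitude}`);
    }
    return pool.end();
  })
  .catch(console.error);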

Assignment: RESTful Earthquake Data Server

Use ExpressJS and write a webserver that implements a RESTful API to access the earthquake data.

Endpoints:

/ (GET) Output usage information in HTML.

Example results:

<html>
    <body>Usage: [endpoint info]</body>
</html>

/minmag (GET) Output JSON list of earthquakes that are larger than the value specified in the mag parameter. Use form encoding to pass the data.

Example results:

{
    "results": [
        {
            "name": "Earthquake 2",
            "magnitude": 7
        },
        {
            "name": "Earthquake 4",
            "magnitude": 5.2
        }
    ]
}

Extra Credit:

/new (POST) Add a new earthquake to the database. Use form encoding to pass name and mag. Return a JSON status message:

{ "status": "ok" }

or

{ "status": "error", "message": "[error message]" }

/delete (DELETE) Delete an earthquake from the database. Use form encoding to pass name. Return status similar to /new, above.
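
One possible skeleton for the two required endpoints, using Express and node-postgres (port, names, and error handling are assumptions, not a reference solution):

const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool({ database: 'dbname' });

app.use(express.urlencoded({ extended: true }));   // accept form-encoded bodies

// GET / -- usage info in HTML
app.get('/', (req, res) => {
  res.send('<html><body>Usage: GET /minmag?mag=N returns earthquakes larger than N</body></html>');
});

// GET /minmag -- earthquakes larger than the mag parameter
app.get('/minmag', async (req, res) => {
  try {
    const result = await pool.query(
      'SELECT Name, Magnitude FROM Earthquake WHERE Magnitude > $1 ORDER BY Magnitude DESC',
      [req.query.mag]);
    res.json({
      results: result.rows.map(r => ({ name: r.name, magnitude: r.magnitude }))
    });
  } catch (err) {
    res.status(500).json({ status: 'error', message: err.message });
  }
});

app.listen(3000, () => console.log('Earthquake server listening on port 3000'));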

1-projects/solutions/


Sample Solutions

  1. Run npm install to install the prereqs.
  2. Run node maketable to create the DB tables.
  3. Run node earthquake 2.9 to see all earthquakes larger than magnitude 2.9.
1-projects/webapi-ii-challenge-master/


Building RESTful APIs with Express

Topics

  • Express Routing
  • Reading Request data from body and URL parameters
  • Sub-routes
  • API design and development.

Description

Use Node.js and Express to build an API that performs CRUD operations on blog posts.

Project Setup

  • Fork and Clone this repository.
  • CD into the folder where you cloned the repository.
  • Type npm install to download all dependencies.
  • To start the server, type npm run server from the root folder (where the package.json file is). The server is configured to restart automatically as you make changes.

Database Persistence Helpers

The data folder contains a database populated with test posts.

Database access will be done using the db.js file included inside the data folder.

The db.js publishes the following methods:

  • find(): calling find returns a promise that resolves to an array of all the posts contained in the database.
  • findById(): this method expects an id as its only parameter and returns the post corresponding to the id provided or an empty array if no post with that id is found.
  • insert(): calling insert passing it a post object will add it to the database and return an object with the id of the inserted post. The object looks like this: { id: 123 }.
  • update(): accepts two arguments, the first is the id of the post to update and the second is an object with the changes to apply. It returns the count of updated records. If the count is 1 it means the record was updated correctly.
  • remove(): the remove method accepts an id as its first parameter and upon successfully deleting the post from the database it returns the number of records deleted.
  • findPostComments(): the findPostComments accepts a postId as its first parameter and returns all comments on the post associated with the post id.
  • findCommentById(): accepts an id and returns the comment associated with that id.
  • insertComment(): calling insertComment while passing it a comment object will add it to the database and return an object with the id of the inserted comment. The object looks like this: { id: 123 }. This method will throw an error if the post_id field in the comment object does not match a valid post id in the database.

Now that we have a way to add, update, remove and retrieve data from the provided database, it is time to work on the API.

Blog Post Schema

A Blog Post in the database has the following structure:

{
  title: "The post title", // String, required
  contents: "The post contents", // String, required
  created_at: Mon Aug 14 2017 12:50:16 GMT-0700 (PDT) // Date, defaults to current date
  updated_at: Mon Aug 14 2017 12:50:16 GMT-0700 (PDT) // Date, defaults to current date
}

Comment Schema

A Comment in the database has the following structure:

{
  text: "The text of the comment", // String, required
  post_id: "The id of the associated post", // Integer, required, must match the id of a post entry in the database
  created_at: Mon Aug 14 2017 12:50:16 GMT-0700 (PDT) // Date, defaults to current date
  updated_at: Mon Aug 14 2017 12:50:16 GMT-0700 (PDT) // Date, defaults to current date
}

Minimum Viable Product

  • Add the code necessary to implement the endpoints listed below.
  • Separate the endpoints that begin with /api/posts into a separate Express Router.

Endpoints

Configure the API to handle the following routes:

Method Endpoint Description
POST /api/posts Creates a post using the information sent inside the request body.
POST /api/posts/:id/comments Creates a comment for the post with the specified id using information sent inside of the request body.
GET /api/posts Returns an array of all the post objects contained in the database.
GET /api/posts/:id Returns the post object with the specified id.
GET /api/posts/:id/comments Returns an array of all the comment objects associated with the post with the specified id.
DELETE /api/posts/:id Removes the post with the specified id and returns the deleted post object. You may need to make additional calls to the database in order to satisfy this requirement.
PUT /api/posts/:id Updates the post with the specified id using data from the request body. Returns the modified document, NOT the original.

Endpoint Specifications

When the client makes a POST request to /api/posts (one possible handler is sketched after this list):

  • If the request body is missing the title or contents property:

    • cancel the request.
    • respond with HTTP status code 400 (Bad Request).
    • return the following JSON response: { errorMessage: "Please provide title and contents for the post." }.
  • If the information about the post is valid:

    • save the new post to the database.
    • return HTTP status code 201 (Created).
    • return the newly created post.
  • If there's an error while saving the post:

    • cancel the request.
    • respond with HTTP status code 500 (Server Error).
    • return the following JSON object: { error: "There was an error while saving the post to the database" }.
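
As one illustration of the shape expected here, a handler for this first endpoint might look like the sketch below. It assumes the db.js helpers described above return promises and that express.json() is applied in the server file (as mentioned in the stretch section); adapt it to the actual project structure.

// routers/posts.js -- a sketch, not the reference solution
const express = require('express');
const db = require('../data/db.js');

const router = express.Router();

// POST /api/posts
router.post('/', async (req, res) => {
  const { title, contents } = req.body;

  if (!title || !contents) {
    return res.status(400).json({
      errorMessage: 'Please provide title and contents for the post.'
    });
  }

  try {
    const { id } = await db.insert({ title, contents });
    const post = await db.findById(id);
    res.status(201).json(post);
  } catch (err) {
    res.status(500).json({
      error: 'There was an error while saving the post to the database'
    });
  }
});

module.exports = router;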

When the client makes a POST request to /api/posts/:id/comments:

  • If the post with the specified id is not found:

    • return HTTP status code 404 (Not Found).
    • return the following JSON object: { message: "The post with the specified ID does not exist." }.
  • If the request body is missing the text property:

    • cancel the request.
    • respond with HTTP status code 400 (Bad Request).
    • return the following JSON response: { errorMessage: "Please provide text for the comment." }.
  • If the information about the comment is valid:

    • save the new comment to the database.
    • return HTTP status code 201 (Created).
    • return the newly created comment.
  • If there's an error while saving the comment:

    • cancel the request.
    • respond with HTTP status code 500 (Server Error).
    • return the following JSON object: { error: "There was an error while saving the comment to the database" }.

When the client makes a GET request to /api/posts:

  • If there's an error in retrieving the posts from the database:
    • cancel the request.
    • respond with HTTP status code 500.
    • return the following JSON object: { error: "The posts information could not be retrieved." }.

When the client makes a GET request to /api/posts/:id:

  • If the post with the specified id is not found:

    • return HTTP status code 404 (Not Found).
    • return the following JSON object: { message: "The post with the specified ID does not exist." }.
  • If there's an error in retrieving the post from the database:

    • cancel the request.
    • respond with HTTP status code 500.
    • return the following JSON object: { error: "The post information could not be retrieved." }.

When the client makes a GET request to /api/posts/:id/comments:

  • If the post with the specified id is not found:

    • return HTTP status code 404 (Not Found).
    • return the following JSON object: { message: "The post with the specified ID does not exist." }.
  • If there's an error in retrieving the comments from the database:

    • cancel the request.
    • respond with HTTP status code 500.
    • return the following JSON object: { error: "The comments information could not be retrieved." }.

When the client makes a DELETE request to /api/posts/:id:

  • If the post with the specified id is not found:

    • return HTTP status code 404 (Not Found).
    • return the following JSON object: { message: "The post with the specified ID does not exist." }.
  • If there's an error in removing the post from the database:

    • cancel the request.
    • respond with HTTP status code 500.
    • return the following JSON object: { error: "The post could not be removed" }.

When the client makes a PUT request to /api/posts/:id:

  • If the post with the specified id is not found:

    • return HTTP status code 404 (Not Found).
    • return the following JSON object: { message: "The post with the specified ID does not exist." }.
  • If the request body is missing the title or contents property:

    • cancel the request.
    • respond with HTTP status code 400 (Bad Request).
    • return the following JSON response: { errorMessage: "Please provide title and contents for the post." }.
  • If there's an error when updating the post:

    • cancel the request.
    • respond with HTTP status code 500.
    • return the following JSON object: { error: "The post information could not be modified." }.
  • If the post is found and the new information is valid:

    • update the post document in the database using the new information sent in the request body.
    • return HTTP status code 200 (OK).
    • return the newly updated post.

Stretch Problems

To work on the stretch problems you'll need to enable the cors middleware. Follow these steps:

  • add the cors npm module: npm i cors.
  • add server.use(cors()) after server.use(express.json()).

Create a new React application and connect it to your server:

  • Use create-react-app to create an application inside the root folder, name it client.
  • From the React application connect to the /api/posts endpoint in the API and show the list of posts.
  • Style the list of posts however you see fit.
2-resources/__CHEAT-SHEETS/All/



101 (JavaScript libraries)

101 is a JavaScript library for dealing with immutable data in a functional manner.

Usage

const isObject = require('101/isObject')
isObject({}) // → true

Every function is exposed as a module.

See: 101

Type checking

isObject({})
isString('str')
isRegExp(/regexp/)
isBoolean(true)
isEmpty({})
isFunction(x => x)
isInteger(10)
isNumber(10.1)
instanceOf(obj, 'string')

Objects


Example


let obj = {}

Update

obj = put(obj, 'user.name', 'John')
// → { user: { name: 'John' } }

Read

pluck(obj, 'user.name')
// → 'John'

Delete

obj = del(obj, 'user')
// → { }

Getting

pluck(state, 'user.profile.name')
pick(state, ['user', 'ui'])
pick(state, /^_/)

pluck returns values, pick returns subsets of objects.

See: pluck, pick

Setting

put(state, 'user.profile.name', 'john')

See: put

Deleting

del(state, 'user.profile')
omit(state, ['user', 'data'])

omit is like del, but supports multiple keys to be deleted.

See: omit, del

Keypath check

hasKeypaths(state, ['user'])
hasKeypaths(state, { 'user.profile.name': 'john' })

See: hasKeypaths

Get values

values(state)

Functions

Simple functions

and(x, y)       x && y
or(x, y)        x || y
xor(x, y)       !(!x && !y) && !(x && y)
equals(x, y)    x === y
exists(x)       !!x
not(x)          !x

Useful for function composition.

See: and, equals, exists

Composition

compose(f, g)       // x => f(g(x))
curry(f)            // x => y => f(x, y)
flip(f)             // f(x, y) --> f(y, x)

See: compose, curry, flip

And/or

passAll(f, g)       // x => f(x) && g(x)
passAny(f, g)       // x => f(x) || g(x)

See: passAll, passAny

Converge

converge(and, [pluck('a'), pluck('b')])(x)
// → and(pluck(x, 'a'), pluck(x, 'b'))

See: converge

Arrays

Finding

find(list, x => x.y === 2)
findIndex(list, x => ...)
includes(list, 'item')
last(list)
find(list, hasProps('id'))

Grouping

groupBy(list, 'id')
indexBy(list, 'id')

Examples

Function composition

isFloat = passAll(isNumber, compose(isInteger, not))
// n => isNumber(n) && not(isInteger(n))
function doStuff (object, options) { ... }

doStuffForce = curry(flip(doStuff))({ force: true })

Reference



Absinthe (Elixir)

Absinthe allows you to write GraphQL servers in Elixir.

Introduction

Concepts

  • Schema - The root. Defines what queries you can do, and what types they return.
  • Resolver - Functions that return data.
  • Type - A type definition describing the shape of the data you'll return.

Plug

web/router.ex

defmodule Blog.Web.Router do
  use Phoenix.Router

  forward "/", Absinthe.Plug,
    schema: Blog.Schema
end


Absinthe is a Plug, and you pass it one Schema.

See: Our first query

Main concepts


Schema

web/schema.ex

defmodule Blog.Schema do
  use Absinthe.Schema
  import_types Blog.Schema.Types

  query do
    @desc "Get a list of blog posts"
    field :posts, list_of(:post) do
      resolve &Blog.PostResolver.all/2
    end
  end
end


This schema will account for { posts { ··· } }. It returns a Type of :post, and delegates to a Resolver.

Resolver

web/resolvers/post_resolver.ex

defmodule Blog.PostResolver do
  def all(_args, _info) do
    {:ok, Blog.Repo.all(Blog.Post)}
  end
end


This is the function that the schema delegated the posts query to.

Type

web/schema/types.ex

defmodule Blog.Schema.Types do
  use Absinthe.Schema.Notation

  @desc "A blog post"
  object :post do
    field :id, :id
    field :title, :string
    field :body, :string
  end
end


This defines a type :post, which is used by the resolver.

Schema

Query arguments

GraphQL query

{ user(id: "1") { ··· } }

web/schema.ex

query do
  field :user, type: :user do
    arg :id, non_null(:id)
    resolve &Blog.UserResolver.find/2
  end
end


Resolver

def find(%{id: id} = args, _info) do
  ···
end


See: Query arguments

Mutations

GraphQL query

mutation CreatePost {
  post(title: "Hello") { id }
}

web/schema.ex

mutation do
  @desc "Create a post"
  field :post, type: :post do
    arg :title, non_null(:string)
    resolve &Blog.PostResolver.create/2
  end
end


See: Mutations

References



ActiveAdmin (Ruby)

Listing scopes

Allows you to filter listings by a certain scope.

scope :draft
scope :for_approval
scope :public, if: ->{ current_admin_user.can?(...) }
scope "Unapproved", :pending
scope("Published") { |books| books.where(:published: true) }

Sidebar filters

filter :email
filter :username

Custom actions

You can define custom actions for models.

before_filter only: [:show, :edit, :publish] do
  @post = Post.find(params[:id])
end

Make the route

member_action :publish, method: :put do
  @post.publish!
  redirect_to admin_posts_path, notice: "The post '#{@post}' has been published!"
end

Link it in the index

index do
  column do |post|
    link_to 'Publish', publish_admin_post_path(post), method: :put
  end
end

And link it in show/edit

action_item only: [:edit, :show] do
  @post = Post.find(params[:id])
  link_to 'Publish', publish_admin_post_path(@post), method: :put
end

Columns

column :foo
column :title, sortable: :name do |post|
  strong post.title
end

Other helpers

status_tag "Done"           # Gray
status_tag "Finished", :ok  # Green
status_tag "You", :warn     # Orange
status_tag "Failed", :error # Red

Disabling 'new post'

ActiveAdmin.register Post do
  actions :index, :edit
  # or: config.clear_action_items!
end



adb (Android Debug Bridge)

Author: ZackNeyland. Updated 2018-03-06.

Device Basics

Command Description
adb devices Lists connected devices
adb devices -l Lists connected devices and kind
adb root Restarts adbd with root permissions
adb start-server Starts the adb server
adb kill-server Kills the adb server
adb remount Remounts file system with read/write access
adb reboot Reboots the device
adb reboot bootloader Reboots the device into fastboot
adb disable-verity Disables dm-verity checking (requires root; reboot to take effect)

wait-for-device can be specified after adb to ensure that the command will run once the device is connected.

-s can be used to send the commands to a specific device when multiple are connected.

Examples

$ adb wait-for-device devices
 List of devices attached
 somedevice-1234 device
 someotherdevice-1234 device
$ adb -s somedevice-1234 root

Logcat

Command Description
adb logcat Starts printing log messages to stdout
adb logcat -g Displays current log buffer sizes
adb logcat -G <size> Sets the buffer size (K or M)
adb logcat -c Clears the log buffers
adb logcat *:V Enables ALL log messages (verbose)
adb logcat -f <filename> Dumps to specified file

Examples

$ adb logcat -G 16M
$ adb logcat *:V > output.log

File Management

Command Description
adb push <local> <remote> Copies the local to the device at remote
adb pull <remote> <local> Copies the remote from the device to local

Examples

$ echo "This is a test" > test.txt
$ adb push  test.txt /sdcard/test.txt
$ adb pull /sdcard/test.txt pulledTest.txt

Remote Shell

Command Description
adb shell <command> Runs the specified command on device (most unix commands work here)
adb shell wm size Displays the current screen resolution
adb shell wm size WxH Sets the resolution to WxH
adb shell pm list packages Lists all installed packages
adb shell pm list packages -3 Lists all installed 3rd-party packages
adb shell monkey -p app.package.name Starts the specified package
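Examples (the package name and resolution below are placeholders):

$ adb shell pm list packages -3
$ adb shell wm size 1080x1920
$ adb shell monkey -p com.example.app 1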

Table of Contents generated with DocToc


title: Google Analytics's analytics.js category: Analytics layout: 2017/sheet updated: 2017-10-29 intro: | Google Analytics's analytics.js is deprecated.

Page view

ga('create', 'UA-XXXX-Y', 'auto')
ga('create', 'UA-XXXX-Y', { userId: 'USER_ID' })
ga('send', 'pageview')
ga('send', 'pageview', { 'dimension15': 'My custom dimension' })

Events

ga('send', 'event', 'button',  'click', {color: 'red'});
ga('send', 'event', 'button',  'click', 'nav buttons',  4);
/*                  ^category  ^action  ^label          ^value */

Exceptions

ga('send', 'exception', {
  exDescription: 'DatabaseError',
  exFatal: false,
  appName: 'myapp',
  appVersion: '0.1.2'
})

Table of Contents generated with DocToc


title: Analytics libraries layout: 2017/sheet category: Analytics

Mixpanel

mixpanel.identify('284');
mixpanel.people.set({ $email: 'hi@gmail.com' });
mixpanel.register({ age: 28, gender: 'male' }); /* set common properties */


Google Analytics's analytics.js

ga('create', 'UA-XXXX-Y', 'auto');
ga('create', 'UA-XXXX-Y', { userId: 'USER_ID' });
ga('send', 'pageview');
ga('send', 'pageview', { 'dimension15': 'My custom dimension' });


Table of Contents generated with DocToc


title: Angular.js category: JavaScript libraries

    <html ng-app="nameApp">

Lists (ng-repeat)

    <ul ng-controller="MyListCtrl">
      <li ng-repeat="phone in phones">
        {{phone.name}}
      </li>
    </ul>

Model (ng-model)

    <select ng-model="orderProp">
      <option value="name">Alphabetical</option>
      <option value="age">Newest</option>
    </select>

Defining a module

    App = angular.module('myApp', []);

    App.controller('MyListCtrl', function ($scope) {
      $scope.phones = [ ... ];
    });

Controller with protection from minification

    App.controller('Name', [
      '$scope',
      '$http',
      function ($scope, $http) {
      }
    ]);

    a.c 'name', [
      '$scope'
      '$http'
      ($scope, $http) ->
    ]

Service

    App.service('NameService', function($http){
      return {
        get: function(){
          return $http.get(url);
        }
      }
    });

In the controller, inject the service and call it; it returns a promise that resolves with the data from the server.

    App.controller('controllerName',
    function(NameService){
      NameService.get()
      .then(function(){})
    })

Directive

    App.directive('name', function(){
      return {
        template: '<h1>Hello</h1>'
      }
    });

In HTML, use <name></name> to render the template <h1>Hello</h1>.

HTTP

    App.controller('PhoneListCtrl', function ($scope, $http) {
        $http.get('/data.json').success(function (data) {
            $scope.phones = data;
        })
    });
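Note that the .success/.error helpers were removed in later Angular 1.x releases; the equivalent promise-based form is sketched below:

    App.controller('PhoneListCtrl', function ($scope, $http) {
        $http.get('/data.json').then(function (response) {
            $scope.phones = response.data;
        });
    });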

References:

Table of Contents generated with DocToc


title: Animated GIFs category: CLI layout: 2017/sheet

Animated GIFs


Convert MP4 to GIF

mkdir -p gif
mplayer -ao null -vo gif89a:outdir=gif $INPUT
mogrify -format gif *.png
gifsicle --colors=256 --delay=4 --loopcount=0 --dither -O3 gif/*.gif > ${INPUT%.*}.gif
rm -rf gif

You'll need mplayer, imagemagick and gifsicle. This converts frames to .png, then turns them into an animated gif.

A given range

mplayer -ao null -ss 0:02:06 -endpos 0:05:00 -vo gif89a:outdir=gif videofile.mp4

See -ss and -endpos.

Table of Contents generated with DocToc


title: Ansi codes category: CLI layout: 2017/sheet intro: | Quick reference to ANSI color codes.

Format

\033[#m

ANSI codes

0      clear
1      bold
4      underline
5      blink

30-37  fg color
40-47  bg color

1K     clear line (to beginning of line)
2K     clear line (entire line)
2J     clear screen
0;0H   move cursor to 0;0

1A     move up 1 line

Colors

0      black
1      red
2      green
3      yellow
4      blue
5      magenta
6      cyan
7      white
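For example, combining bold (1) with a red foreground (31), then resetting with 0 (a small illustration):

printf "\033[1;31mError:\033[0m something failed\n"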

Bash utilities

hide_cursor() { printf "\e[?25l"; }
show_cursor() { printf "\e[?25h"; }

Table of Contents generated with DocToc


title: Ansible examples category: Ansible layout: 2017/sheet

Examples

Table of Contents generated with DocToc


title: "Ansible quickstart" category: Ansible layout: 2017/sheet description: | A quick guide to getting started with your first Ansible playbook.

Install Ansible

$ brew install ansible            # OSX
$ [sudo] apt install ansible      # elsewhere

Ansible is available as a package in most OS's.

See: Installation

Start your project

~$ mkdir setup
~$ cd setup

Make a folder for your Ansible files.

See: Getting started

Creating your files

Inventory file

~/setup/hosts

[sites]
127.0.0.1
192.168.0.1
192.168.0.2
192.168.0.3

This is a list of hosts you want to manage, grouped into groups. (Hint: try using localhost ansible_connection=local to deploy to your local machine.)

See: Intro to Inventory

Playbook

~/setup/playbook.yml

- hosts: 127.0.0.1
  user: root
  tasks:
    - name: install nginx
      apt: pkg=nginx state=present

    - name: start nginx every bootup
      service: name=nginx state=started enabled=yes

    - name: do something in the shell
      shell: echo hello > /tmp/abc.txt

    - name: install bundler
      gem: name=bundler state=latest

See: Intro to Playbooks

Running

Running ansible-playbook

~/setup$ ls
hosts
playbook.yml

Running the playbook

~/setup$ ansible-playbook -i hosts playbook.yml
PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************
ok: [127.0.0.1]

TASK: [install nginx] *********************************************************
ok: [127.0.0.1]

TASK: [start nginx every bootup] *********************************************
ok: [127.0.0.1]
...

Read more

Table of Contents generated with DocToc


title: Ansible modules category: Ansible layout: 2017/sheet prism_languages: [yaml] updated: 2017-10-03


Format

Basic file

---
- hosts: production
  remote_user: root
  tasks:
  - ···

Place your modules inside tasks.

Task formats

One-line

- apt: pkg=vim state=present

Map

- apt:
    pkg: vim
    state: present

Foldable scalar

- apt: >
    pkg=vim
    state=present

Define your tasks in any of these formats. One-line format is preferred for short declarations, while maps are preferred for longer.

Modules

Aptitude

Packages

- apt:
    pkg: nodejs
    state: present # absent | latest
    update_cache: yes
    force: no

Deb files

- apt:
    deb: "https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb"

Repositories

- apt_repository:
    repo: "deb https://··· raring main"
    state: present

Repository keys

- apt_key:
    id: AC40B2F7
    url: "http://···"
    state: present

git

- git:
    repo: git://github.com/
    dest: /srv/checkout
    version: master
    depth: 10
    bare: yes

See: git module

git_config

- git_config:
    name: user.email
    scope: global # local | system
    value: hi@example.com

See: git_config module

user

- user:
    state: present
    name: git
    system: yes
    shell: /bin/sh
    groups: admin
    comment: "Git Version Control"

See: user module

service

- service:
    name: nginx
    state: started
    enabled: yes     # optional

See: service module

Shell

shell

- shell: apt-get install nginx -y

Extra options

- shell: echo hello
  args:
    creates: /path/file  # skip if this exists
    removes: /path/file  # skip if this is missing
    chdir: /path         # cd here before running

Multiline example

- shell: |
    echo "hello there"
    echo "multiple lines"

See: shell module

script

- script: /x/y/script.sh
  args:
    creates: /path/file  # skip if this exists
    removes: /path/file  # skip if this is missing
    chdir: /path         # cd here before running

See: script module

Files

file

- file:
    path: /etc/dir
    state: directory # file | link | hard | touch | absent

    # Optional:
    owner: bin
    group: wheel
    mode: 0644
    recurse: yes  # mkdir -p
    force: yes    # ln -nfs

See: file module

copy

- copy:
    src: /app/config/nginx.conf
    dest: /etc/nginx/nginx.conf

    # Optional:
    owner: user
    group: user
    mode: 0644
    backup: yes

See: copy module

template

- template:
    src: config/redis.j2
    dest: /etc/redis.conf

    # Optional:
    owner: user
    group: user
    mode: 0644
    backup: yes

See: template module

Local actions

local_action

- name: do something locally
  local_action: shell echo hello
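An equivalent way to run a task on the control machine is delegate_to (a sketch with the same effect as above):

- name: do something locally
  shell: echo hello
  delegate_to: localhost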

debug

- debug:
    msg: "Hello {{ var }}"

See: debug module

Table of Contents generated with DocToc


title: Ansible roles category: Ansible layout: 2017/sheet

Structure

roles/
  common/
    tasks/
    handlers/
    files/              # 'copy' will refer to this
    templates/          # 'template' will refer to this
    meta/               # Role dependencies here
    vars/
    defaults/
      main.yml
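A playbook applies a role by listing it under roles; a minimal sketch using the common role above:

- hosts: all
  roles:
    - common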

References

Table of Contents generated with DocToc


title: Ansible category: Ansible


Getting started

Hosts

$ sudo mkdir /etc/ansible
$ sudo vim /etc/ansible/hosts

[example]
192.0.2.101
192.0.2.102

Running a playbook

$ ansible-playbook playbook.yml

Tasks

- hosts: all
  user: root
  sudo: no
  vars:
    aaa: bbb
  tasks:
    - ...
  handlers:
    - ...

Includes

tasks:
  - include: db.yml
handlers:
  - include: db.yml user=timmy

Handlers

handlers:
  - name: start apache2
    action: service name=apache2 state=started

tasks:
  - name: install apache
    action: apt pkg=apache2 state=latest
    notify:
      - start apache2

Vars

- host: lol
  vars_files:
    - vars.yml
  vars:
    project_root: /etc/xyz
  tasks:
    - name: Create the SSH directory.
      file: state=directory path=${project_root}/home/.ssh/
      only_if: "$vm == 0"

Roles

- host: xxx
  roles:
    - db
    - { role: ruby, sudo_user: $user }
    - web

# Uses:
# roles/db/tasks/*.yml
# roles/db/handlers/*.yml

Task: Failures

- name: my task
  command: ...
  register: result
  failed_when: "'FAILED' in result.stderr"

  ignore_errors: yes

  changed_when: "result.rc != 2"

Env vars

vars:
  local_home: "{{ lookup('env','HOME') }}"

References


Table of Contents generated with DocToc


title: Appcache category: HTML layout: 2017/sheet

Format

CACHE MANIFEST
# version

CACHE:
http://www.google.com/jsapi
/assets/app.js
/assets/bg.png

NETWORK:
*

Note that Appcache is deprecated!

See: Using the application cache (developer.mozilla.org)
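The manifest is referenced from the html element; shown only for completeness since the API is deprecated (the filename is a placeholder):

<html manifest="example.appcache">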

Table of Contents generated with DocToc


title: AppleScript updated: 2018-12-06 layout: 2017/sheet category: macOS prism_languages: [applescript]

Running

osascript -e "..."
display notification "X" with title "Y"
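For example, to post a notification straight from the shell:

osascript -e 'display notification "Build finished" with title "Status"'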

Comments

-- This is a single line comment
# This is another single line comment
(*
This is
a multi
line comment
*)

Say

-- default voice
say "Hi I am a Mac"
-- specified voice
say "Hi I am a Mac" using "Zarvox"

Beep

-- beep once
beep
-- beep 10 times
beep 10

Delay

-- delay for 5 seconds
delay 5

Table of Contents generated with DocToc


title: Applinks category: HTML

<meta property="al:ios:url" content="applinks://docs" />
<meta property="al:ios:app_store_id" content="12345" />
<meta property="al:ios:app_name" content="App Links" />

<meta property="al:android:url" content="applinks://docs" />
<meta property="al:android:app_name" content="App Links" />
<meta property="al:android:package" content="org.applinks" />

<meta property="al:web:url" content="http://applinks.org/documentation" />

Device types

  • ios
  • ipad
  • iphone
  • android
  • windows_phone
  • web

Reference

Table of Contents generated with DocToc


title: Arel category: Rails

Tables

users = Arel::Table.new(:users)
users = User.arel_table  # ActiveRecord model

Fields

users[:name]
users[:id]

where (restriction)

users.where(users[:name].eq('amy'))
# SELECT * FROM users WHERE users.name = 'amy'

select (projection)

users.project(users[:id])
# SELECT users.id FROM users

join

basic join

In ActiveRecord (without Arel), if :photos is the name of the association, use joins

users.joins(:photos)

In Arel, if photos is defined as the Arel table,

photos = Photo.arel_table
users.join(photos) 
users.join(photos, Arel::Nodes::OuterJoin).on(users[:id].eq(photos[:user_id]))

join with conditions

users.joins(:photos).merge(Photo.where(published: true))

If the simpler version doesn't help and you want to add more SQL statements to it:

users.join(
   users.join(photos, Arel::Nodes::OuterJoin)
   .on(photos[:user_id].eq(users[:id]).and(photos[:published].eq(true)))
)

advanced join

multiple joins with the same table but different meanings and/or conditions

creators = User.arel_table.alias('creators')
updaters = User.arel_table.alias('updaters')
photos = Photo.arel_table

photos_with_credits = photos
.join(photos.join(creators, Arel::Nodes::OuterJoin).on(photos[:created_by_id].eq(creators[:id])))
.join(photos.join(updaters, Arel::Nodes::OuterJoin).on(photos[:updated_by_id].eq(updaters[:id])))
.project(photos[:name], photos[:created_at], creators[:name].as('creator'), updaters[:name].as('editor'))

photos_with_credits.to_sql
# => "SELECT `photos`.`name`, `photos`.`created_at`, `creators`.`name` AS creator, `updaters`.`name` AS editor FROM `photos` INNER JOIN (SELECT FROM `photos` LEFT OUTER JOIN `users` `creators` ON `photos`.`created_by_id` = `creators`.`id`) INNER JOIN (SELECT FROM `photos` LEFT OUTER JOIN `users` `updaters` ON `photos`.`updated_by_id` = `updaters`.`id`)"

# after the request is done, you can use the attributes you named
# it's as if every Photo record you got has "creator" and "editor" fields, containing creator name and editor name
photos_with_credits.map{|photo|
  "#{photo.name} - copyright #{photo.created_at.year} #{photo.creator}, edited by #{photo.editor}"
}.join('; ')

limit / offset

users.take(5) # => SELECT * FROM users LIMIT 5
users.skip(4) # => SELECT * FROM users OFFSET 4

Aggregates

users.project(users[:age].sum) # .average .minimum .maximum
users.project(users[:id].count)
users.project(users[:id].count.as('user_count'))

order

users.order(users[:name])
users.order(users[:name], users[:age].desc)
users.reorder(users[:age])

With ActiveRecord

User.arel_table
User.where(id: 1).arel
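Arel nodes can also be passed directly to ActiveRecord's where; a small sketch:

users = User.arel_table
User.where(users[:name].matches('A%'))
# generates a LIKE (ILIKE on PostgreSQL) condition on users.name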

Clean code with arel

Most of the clever stuff should be in scopes, e.g. the code above could become:

photos_with_credits = Photo.with_creator.with_editor

You can store requests in variables then add SQL segments:

all_time      = photos_with_credits.count
this_month    = photos_with_credits.where(photos[:created_at].gteq(Date.today.beginning_of_month))
recent_photos = photos_with_credits.where(photos[:created_at].gteq(Date.today.beginning_of_month)).limit(5)

Reference

Table of Contents generated with DocToc


title: Atom category: Apps layout: 2017/sheet updated: 2018-06-14

Shortcuts


Tree

Shortcut Description
⌘\ Toggle tree
⌘⇧\ Reveal current file

Comments

Shortcut Description
⌘/ Toggle comments

View

Shortcut Description
⌘k Split pane to the left
--- ---
⌘⌥= Grow pane
⌘⌥- Shrink pane
--- ---
^⇧← Move tab to left

Bracket matcher

Shortcut Description
^m Go to matching bracket
^] Remove brackets from selection
^⌘m Select inside brackets
⌥⌘. Close tag

Symbols view

Shortcut Description
^⌥↓ Jump to declaration under cursor
^⇧r Show tags

Symbols view enables Ctags support for Atom.

See: Symbols view

Git

Shortcut Description
^⇧9 Show Git pane
^⇧8 Show GitHub pane

Editing

Shortcut Description
⌘d Select word
⌘l Select line
--- ---
⌘↓ Move line down
⌘↑ Move line up
--- ---
⌘⏎ New line below
⌘⇧⏎ New line above
--- ---
⌘⇧k Delete line
⌘⇧d Duplicate line

Project

Shortcut Description
⌘⇧p Command palette
⌘⇧a Add project folder
--- ---
⌘n New file
⌘⇧n New window
--- ---
⌘f Find in file
⌘⇧f Find in project
⌘t Search files in project

Notes

  • For Windows and Linux, ⌘ is the Control key.
  • For macOS, it's the Command key.

  • For Windows and Linux, ⌥ is the Alt key.
  • For macOS, it's the Option key.

Table of Contents generated with DocToc


title: Awesome Redux category: React layout: 2017/sheet updated: 2017-11-19

redux-actions

Create action creators in flux standard action format.

increment = createAction('INCREMENT', amount => amount)
increment = createAction('INCREMENT')  // same
increment(42) === { type: 'INCREMENT', payload: 42 }
// Errors are handled for you:
err = new Error()
increment(err) === { type: 'INCREMENT', payload: err, error: true }


flux-standard-action

A standard for flux action objects. An action may have error, payload, and meta properties, and nothing else.

{ type: 'ADD_TODO', payload: { text: 'Work it' } }
{ type: 'ADD_TODO', payload: new Error(), error: true }


redux-multi

Dispatch multiple actions in one action creator.

store.dispatch([
  { type: 'INCREMENT', payload: 2 },
  { type: 'INCREMENT', payload: 3 }
])


reduce-reducers

Combines reducers (like combineReducers()), but without namespacing magic.

re = reduceReducers(
  (state, action) => state + action.number,
  (state, action) => state + action.number
)

re(10, { number: 2 })  //=> 14


redux-logger

Logs actions to your console.

// Nothing to see here
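Typical setup is just adding it as middleware (a sketch; reducer is assumed to be defined elsewhere):

import { createStore, applyMiddleware } from 'redux'
import logger from 'redux-logger'

const store = createStore(reducer, applyMiddleware(logger))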


Async

redux-promise

Pass promises to actions. Dispatches a flux-standard-action.

increment = createAction('INCREMENT')  // redux-actions
increment(Promise.resolve(42))


redux-promises

Similar to redux-promise. Works by letting you pass thunks (functions) to dispatch(). Also has 'idle checking'.

fetchData = (url) => (dispatch) => {
  dispatch({ type: 'FETCH_REQUEST' })
  fetch(url)
    .then((data) => dispatch({ type: 'FETCH_DONE', data }))
    .catch((error) => dispatch({ type: 'FETCH_ERROR', error }))
}

store.dispatch(fetchData('/posts'))
// That's actually shorthand for:
fetchData('/posts')(store.dispatch)


redux-effects

Pass side effects declaratively to keep your actions pure.

{
  type: 'EFFECT_COMPOSE',
  payload: {
    type: 'FETCH',
    payload: {url: '/some/thing', method: 'GET'}
  },
  meta: {
    steps: [ [success, failure] ]
  }
}


redux-thunk

Pass "thunks" to as actions. Extremely similar to redux-promises, but has support for getState. {: .-setup}

fetchData = (url) => (dispatch, getState) => {
  dispatch({ type: 'FETCH_REQUEST' })
  fetch(url)
    .then((data) => dispatch({ type: 'FETCH_DONE', data }))
    .catch((error) => dispatch({ type: 'FETCH_ERROR', error }))
}

store.dispatch(fetchData('/posts'))
// That's actually shorthand for:
fetchData('/posts')(store.dispatch, store.getState)
// Optional: since fetchData returns a promise, it can be chained
// for server-side rendering
store.dispatch(fetchPosts()).then(() => {
  ReactDOMServer.renderToString(<MyApp store={store} />)
})


Table of Contents generated with DocToc


title: AWS CLI category: Devops layout: 2017/sheet

EC2

aws ec2 describe-instances
aws ec2 start-instances --instance-ids i-12345678c
aws ec2 terminate-instances --instance-ids i-12345678c

S3

aws s3 ls s3://mybucket
aws s3 rm s3://mybucket/folder --recursive
aws s3 cp myfolder s3://mybucket/folder --recursive
aws s3 sync myfolder s3://mybucket/folder --exclude *.tmp

ECS

aws ecs create-cluster
  --cluster-name=NAME
  --generate-cli-skeleton

aws ecs create-service

Homebrew

brew install awscli
aws configure

Configuration profiles

aws configure --profile project1
aws configure --profile project2
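Pass --profile to any command to use one of these profiles, e.g.:

aws s3 ls --profile project1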

Elastic Beanstalk

Configuration

  • .elasticbeanstalk/config.yml - application config
  • .elasticbeanstalk/dev-env.env.yml - environment config
eb config

See: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html

ebextensions

Also see

Table of Contents generated with DocToc


title: Backbone.js layout: 2017/sheet updated: 2018-12-06 category: JavaScript libraries

Binding events

.on('event', callback)
.on('event', callback, context)
.on({
  'event1': callback,
  'event2': callback
})
.on('all', callback)
.once('event', callback)   // Only happens once

Unbinding events

object.off('change', onChange)    // just the `onChange` callback
object.off('change')              // all 'change' callbacks
object.off(null, onChange)        // `onChange` callback for all events
object.off(null, null, context)   // all callbacks for `context` all events
object.off()                      // all

Events

object.trigger('event')
view.listenTo(object, event, callback)
view.stopListening()

List of events

  • Collection:

    • add (model, collection, options)
    • remove (model, collection, options)
    • reset (collection, options)
    • sort (collection, options)
  • Model:

    • change (model, options)
    • change:[attr] (model, value, options)
    • destroy (model, collection, options)
    • error (model, xhr, options)
  • Model and collection:

    • request (model, xhr, options)
    • sync (model, resp, options)
  • Router:

    • route:[name] (params)
    • route (router, route, params)

Views

Defining

// All attributes are optional
var View = Backbone.View.extend({
  model: doc,
  tagName: 'div',
  className: 'document-item',
  id: "document-" + doc.id,
  attributes: { href: '#' },
  el: 'body',
  events: {
    'click button.save': 'save',
    'click .cancel': function() { ··· },
    'click': 'onclick'
  },
  constructor: function() { ··· },
  render: function() { ··· }
})

Instantiating

view = new View()
view = new View({ el: ··· })

Methods

view.$el.show()
view.$('input')
view.remove()
view.delegateEvents()
view.undelegateEvents()

Models

Defining

// All attributes are optional
var Model = Backbone.Model.extend({
  defaults: {
    'author': 'unknown'
  },
  idAttribute: '_id',
  parse: function() { ··· }
})

Instantiating

var obj = new Model({ title: 'Lolita', author: 'Nabokov' })
var obj = new Model({ collection: ··· })

Methods

obj.id
obj.cid   // → 'c38' (client-side ID)
obj.clone()
obj.hasChanged('title')
obj.changedAttributes()  // false, or hash
obj.previousAttributes() // false, or hash
obj.previous('title')
obj.isNew()
obj.set({ title: 'A Study in Pink' })
obj.set({ title: 'A Study in Pink' }, { validate: true, silent: true })
obj.unset('title')
obj.get('title')
obj.has('title')
obj.escape('title')     /* Like .get() but HTML-escaped */
obj.clear()
obj.clear({ silent: true })
obj.save()
obj.save({ attributes })
obj.save(null, {
  silent: true, patch: true, wait: true,
  success: callback, error: callback
})
obj.destroy()
obj.destroy({
  wait: true,
  success: callback, error: callback
})
obj.toJSON()
obj.fetch()
obj.fetch({ success: callback, error: callback })

Validation

var Model = Backbone.Model.extend({
  validate: function(attrs, options) {
    if (attrs.end < attrs.start) {
      return "Can't end before it starts"
    }
  }
})


obj.validationError  //=> "Can't end before it starts"
obj.isValid()
obj.on('invalid', function (model, error) { ··· })
// Triggered on:
obj.save()
obj.set({ ··· }, { validate: true })

Custom URLs

var Model = Backbone.Model.extend({
  // Single URL (string or function)
  url: '/account',
  url: function() { return '/account' },
  // These two work the same way
  url: function() { return '/books/' + this.id },
  urlRoot: '/books'
})
var obj = new Model({ url: ··· })
var obj = new Model({ urlRoot: ··· })

References


Table of Contents generated with DocToc


title: Code badges

Here are some badges for open source projects.

Badge markdown

Travis
[![Status](https://travis-ci.org/rstacruz/REPO.svg?branch=master)](https://travis-ci.org/rstacruz/REPO)  
CodeClimate (shields.io)
[![CodeClimate](http://img.shields.io/codeclimate/github/rstacruz/REPO.svg?style=flat)](https://codeclimate.com/github/rstacruz/REPO "CodeClimate")

Coveralls (shields.io)
[![Coveralls](http://img.shields.io/coveralls/rstacruz/REPO.svg?style=flat)](https://coveralls.io/r/rstacruz/REPO)

Travis (shields.io)
[![Status](http://img.shields.io/travis/rstacruz/REPO/master.svg?style=flat)](https://travis-ci.org/rstacruz/REPO "See test builds")

NPM (shields.io)
[![npm version](http://img.shields.io/npm/v/REPO.svg?style=flat)](https://npmjs.org/package/REPO "View this project on npm")

Ruby gem (shields.io)
[![Gem](https://img.shields.io/gem/v/GEMNAME.svg?style=flat)](http://rubygems.org/gems/GEMNAME "View this project in Rubygems")

Etc

Gitter chat
[![Gitter chat](https://badges.gitter.im/USER/REPO.png)](https://gitter.im/REPO/GITTERROOM "Gitter chat")

Gitter chat (shields.io)
[![Chat](http://img.shields.io/badge/gitter-USER / REPO-blue.svg)]( https://gitter.im/USER/REPO )

david-dm
[![Dependencies](http://img.shields.io/david/rstacruz/REPO.svg?style=flat)](https://david-dm.org/rstacruz/REPO)

[![MIT license](http://img.shields.io/badge/license-MIT-brightgreen.svg)](http://opensource.org/licenses/MIT)

[![MIT license](http://img.shields.io/badge/license-MIT-brightgreen.svg)](http://opensource.org/licenses/MIT)

Support stuff

Support
-------

__Bugs and requests__: submit them through the project's issues tracker.<br>
[![Issues](http://img.shields.io/github/issues/USER/REPO.svg)]( https://github.com/USER/REPO/issues )

__Questions__: ask them at StackOverflow with the tag *REPO*.<br>
[![StackOverflow](http://img.shields.io/badge/stackoverflow-REPO-blue.svg)]( http://stackoverflow.com/questions/tagged/REPO )

__Chat__: join us at gitter.im.<br>
[![Chat](http://img.shields.io/badge/gitter.im-USER/REPO-blue.svg)]( https://gitter.im/USER/REPO )

Frontend js installation

Installation
------------

Add [nprogress.js] and [nprogress.css] to your project.

```html
<script src='nprogress.js'></script>
<link rel='stylesheet' href='nprogress.css'/>
```

NProgress is available via [bower] and [npm].

    $ bower install --save nprogress
    $ npm install --save nprogress

[bower]: http://bower.io/search/?q=nprogress
[npm]: https://www.npmjs.org/package/nprogress

Acknowledgements

**PROJECTNAME** &copy; 2014+, Rico Sta. Cruz. Released under the [MIT] License.<br>
Authored and maintained by Rico Sta. Cruz with help from contributors ([list][contributors]).

> [ricostacruz.com](http://ricostacruz.com) &nbsp;&middot;&nbsp;
> GitHub [@rstacruz](https://github.com/rstacruz) &nbsp;&middot;&nbsp;
> Twitter [@rstacruz](https://twitter.com/rstacruz)

[MIT]: http://mit-license.org/
[contributors]: http://github.com/rstacruz/nprogress/contributors

Links

Table of Contents generated with DocToc


title: Bash scripting category: CLI layout: 2017/sheet tags: [Featured] updated: 2020-07-05 keywords:

  • Variables
  • Functions
  • Interpolation
  • Brace expansions
  • Loops
  • Conditional execution
  • Command substitution

Getting started


Introduction


This is a quick reference to getting started with Bash scripting.

Example

#!/usr/bin/env bash

NAME="John"
echo "Hello $NAME!"

Variables

NAME="John"
echo $NAME
echo "$NAME"
echo "${NAME}!"

String quotes

NAME="John"
echo "Hi $NAME"  #=> Hi John
echo 'Hi $NAME'  #=> Hi $NAME

Shell execution

echo "I'm in $(pwd)"
echo "I'm in `pwd`"
# Same

See Command substitution

Conditional execution

git commit && git push
git commit || echo "Commit failed"

Functions


get_name() {
  echo "John"
}

echo "You are $(get_name)"

See: Functions

Conditionals


if [[ -z "$string" ]]; then
  echo "String is empty"
elif [[ -n "$string" ]]; then
  echo "String is not empty"
fi

See: Conditionals

Strict mode

set -euo pipefail
IFS=$'\n\t'

See: Unofficial bash strict mode

Brace expansion

echo {A,B}.js
Expression Description
{A,B} Same as A B
{A,B}.js Same as A.js B.js
{1..5} Same as 1 2 3 4 5

See: Brace expansion

Parameter expansions


Basics

name="John"
echo ${name}
echo ${name/J/j}    #=> "john" (substitution)
echo ${name:0:2}    #=> "Jo" (slicing)
echo ${name::2}     #=> "Jo" (slicing)
echo ${name::-1}    #=> "Joh" (slicing)
echo ${name:(-1)}   #=> "n" (slicing from right)
echo ${name:(-2):1} #=> "h" (slicing from right)
echo ${food:-Cake}  #=> $food or "Cake"
length=2
echo ${name:0:length}  #=> "Jo"

See: Parameter expansion

STR="/path/to/foo.cpp"
echo ${STR%.cpp}    # /path/to/foo
echo ${STR%.cpp}.o  # /path/to/foo.o
echo ${STR%/*}      # /path/to

echo ${STR##*.}     # cpp (extension)
echo ${STR##*/}     # foo.cpp (basepath)

echo ${STR#*/}      # path/to/foo.cpp
echo ${STR##*/}     # foo.cpp

echo ${STR/foo/bar} # /path/to/bar.cpp
STR="Hello world"
echo ${STR:6:5}   # "world"
echo ${STR: -5:5}  # "world"
SRC="/path/to/foo.cpp"
BASE=${SRC##*/}   #=> "foo.cpp" (basepath)
DIR=${SRC%$BASE}  #=> "/path/to/" (dirpath)

Substitution

Code Description
${FOO%suffix} Remove suffix
${FOO#prefix} Remove prefix
--- ---
${FOO%%suffix} Remove long suffix
${FOO##prefix} Remove long prefix
--- ---
${FOO/from/to} Replace first match
${FOO//from/to} Replace all
--- ---
${FOO/%from/to} Replace suffix
${FOO/#from/to} Replace prefix

Comments

# Single line comment
: '
This is a
multi line
comment
'

Substrings

Expression Description
${FOO:0:3} Substring (position, length)
${FOO:(-3):3} Substring from the right

Length

Expression Description
${#FOO} Length of $FOO

Manipulation

STR="HELLO WORLD!"
echo ${STR,}   #=> "hELLO WORLD!" (lowercase 1st letter)
echo ${STR,,}  #=> "hello world!" (all lowercase)

STR="hello world!"
echo ${STR^}   #=> "Hello world!" (uppercase 1st letter)
echo ${STR^^}  #=> "HELLO WORLD!" (all uppercase)

Default values

Expression Description
${FOO:-val} $FOO, or val if unset (or null)
${FOO:=val} Set $FOO to val if unset (or null)
${FOO:+val} val if $FOO is set (and not null)
${FOO:?message} Show error message and exit if $FOO is unset (or null)

Omitting the : removes the null check, e.g. ${FOO-val} expands to val only if $FOO is unset; if $FOO is set but empty, it expands to that empty value.
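A quick illustration of the difference:

unset FOO
echo "${FOO:-val}"   #=> "val"
FOO=""
echo "${FOO:-val}"   #=> "val"  (empty counts as unset with :)
echo "${FOO-val}"    #=> ""     (without :, the empty value is kept)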

Loops


Basic for loop

for i in /etc/rc.*; do
  echo $i
done

C-like for loop

for ((i = 0 ; i < 100 ; i++)); do
  echo $i
done

Ranges

for i in {1..5}; do
    echo "Welcome $i"
done

With step size

for i in {5..50..5}; do
    echo "Welcome $i"
done

Reading lines

cat file.txt | while read line; do
  echo $line
done
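A variant that avoids the extra cat process and keeps leading whitespace and backslashes intact:

while IFS= read -r line; do
  echo "$line"
done < file.txt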

Forever

while true; do
  ···
done

Functions


Defining functions