==============================
Data sources: pricing data, postcode data, Borough Profile data
- Missing completely at random (MCAR): data are missing independently of both observed and unobserved data.
- Missing at random (MAR): given the observed data, missingness is independent of the unobserved data.
- Missing not at random (MNAR): missingness is related to the values of the unobserved data themselves.
- Divide into datasets by Borough.
- Check for NaN values in the postcode list.
- For each batch of 100 postcodes (see the API sketch after this list):
  - call the API
  - parse the response
  - map the response back onto the addresses
- Load a DataFrame with the postcodes.
- Batch by area code (see the NSPL merge sketch after this list):
  - group by area code
  - open the corresponding CSV file, 'NSPL_MAY_2019_UK_{}.csv'.format(areacode)
  - choose the wanted columns, look them up, and append them to the matching rows of the DataFrame
- Postcode lookup table: (2628568, 41)
- Our addresses: (345551, 16)
- Merged result: (157472, 57)
- The merge drops 345,551 - 157,472 = 188,079 rows; does this mean 188,079 addresses were actually duplicates?
- Combine latitude and longitude into a single feature
- Imputation
- Correlation/covariance
- Feature selection
- Plot postcodes on a map
- Plot GPS coordinates
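The batch-of-100 lookup above could be sketched as follows. This assumes the API in question is the postcodes.io bulk endpoint (which caps each request at 100 postcodes); the fields pulled from the response are illustrative only.

```python
import requests
import pandas as pd

def lookup_postcodes(postcodes, batch_size=100):
    """Look up postcodes in batches via the postcodes.io bulk endpoint (assumed here)."""
    records = []
    # Drop NaN entries before hitting the API.
    clean = [p for p in postcodes if pd.notna(p)]
    for i in range(0, len(clean), batch_size):
        batch = clean[i:i + batch_size]
        # Bulk lookup accepts at most 100 postcodes per request.
        resp = requests.post("https://api.postcodes.io/postcodes",
                             json={"postcodes": batch})
        resp.raise_for_status()
        for item in resp.json()["result"]:
            result = item["result"]
            if result is None:  # unknown or retired postcode
                continue
            # Map the fields we care about back onto the query postcode.
            records.append({
                "postcode": item["query"],
                "latitude": result["latitude"],
                "longitude": result["longitude"],
                "admin_district": result["admin_district"],  # borough
            })
    return pd.DataFrame(records)
```

The resulting frame can then be merged back onto the address DataFrame on the postcode column.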
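The per-area-code merge against the NSPL extract might look roughly like this; the NSPL column names (`pcds`, `lat`, `long`, `lsoa11`) and the `data/external` directory are assumptions to be checked against the actual files.

```python
import pandas as pd

# Columns assumed to exist in the NSPL extract; adjust to the actual header row.
NSPL_COLS = ["pcds", "lat", "long", "lsoa11"]

def merge_nspl(df, postcode_col="postcode", nspl_dir="data/external"):
    """Enrich df with NSPL columns by reading one CSV per postcode area code."""
    df = df.copy()
    # Area code = the leading letters of the outward code, e.g. 'SW' in 'SW1A 1AA'.
    df["areacode"] = (df[postcode_col].str.extract(r"^([A-Za-z]+)", expand=False)
                                      .str.upper())
    enriched = []
    for areacode, group in df.groupby("areacode"):
        path = "{}/NSPL_MAY_2019_UK_{}.csv".format(nspl_dir, areacode)
        lookup = pd.read_csv(path, usecols=NSPL_COLS)
        # Left-join the chosen columns onto the addresses for this area;
        # postcode formatting must match the NSPL 'pcds' style (single space).
        enriched.append(group.merge(lookup, how="left",
                                    left_on=postcode_col, right_on="pcds"))
    return pd.concat(enriched, ignore_index=True)
```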
This works very well if you have few categories; however, with thousands of IDs it will increase your dimensionality too much. What you can do instead is collect statistics about the target and other features per group, join these onto your dataset, and then remove the original categories. This is what is usually done with a high number of categories. You have to be careful not to leak any information about your target into your features, though (a problem called label leakage); see the out-of-fold sketch below.
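As a sketch of the group-statistics idea with the leakage guard, the helper below computes per-group target means out of fold, so no row's encoding includes its own target value; the column name `group_mean_<target>` is made up for the example.

```python
import pandas as pd
from sklearn.model_selection import KFold

def add_group_target_stats(df, group_col, target_col, n_splits=5, seed=0):
    """Add an out-of-fold per-group mean of the target, to avoid label leakage."""
    df = df.copy()
    new_col = "group_mean_{}".format(target_col)
    df[new_col] = float("nan")
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, val_idx in kf.split(df):
        # Statistics come from the training fold only...
        fold_means = df.iloc[train_idx].groupby(group_col)[target_col].mean()
        # ...and are mapped onto the held-out fold, so no row sees its own target.
        df.iloc[val_idx, df.columns.get_loc(new_col)] = (
            df.iloc[val_idx][group_col].map(fold_means).values
        )
    # Groups never seen in a training fold fall back to the global mean.
    df[new_col] = df[new_col].fillna(df[target_col].mean())
    return df
```

After the join, the original high-cardinality column (for example a street or property ID) can be dropped before modelling.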
- Transformation
- Normal Distribution
- Cross Validation (see the model-comparison sketch after this list)
- Linear Regression
- Support Vector Machine
- PCA
- Gaussian Naive Bayes
- KNN
- Naive Bayes
- Perceptron
- Neural Net
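A minimal sketch of the cross-validation step over a few of the models listed above, assuming a numeric feature matrix `X` and target `y` are already prepared. Regressor variants are used here because the target is a price; the classification counterparts (GaussianNB, Perceptron, an MLP) would slot in the same way if prices were binned into classes.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor

def compare_models(X, y, cv=5):
    """Cross-validate a few candidate models on the same folds and report R^2."""
    models = {
        "linear_regression": LinearRegression(),
        "svm": make_pipeline(StandardScaler(), SVR()),
        "knn": make_pipeline(StandardScaler(), KNeighborsRegressor()),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
        print("{:20s} R^2 = {:.3f} +/- {:.3f}".format(name, scores.mean(), scores.std()))
```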
├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
Project based on the cookiecutter data science project template. #cookiecutterdatascience