Release/v0.1.0 #6
Open
stephenjfox wants to merge 14 commits into master from release/v0.1.0
Conversation
The code that was in shrink was utility code that I found uses for in at least two other places
+ Satisfies my rule of three occurrences
+ Led to successful porting of notebook code
But this is showing that some things are too granular right now and may be less flexible than I want them to be.
* API usage shouldn't be so dependent on passing around strings
  - That's kind of pathetic
  + Make an informed decision about what's best for the API, then do it.
* Since redo_layer() is so effective, use it to lower the import entity count.
* NIT: make_children_list should be something like 'children_as_list'
It's been missing since the project restructure, and that can get lumped in here.
... of the project's state and intent.
* All of the pieces are there, but they still aren't connected
* That's a poor show of engineering quality
…ilizing-utilities_2019-02-12
* Use more utilities, new abstraction
* Add layer inspectors
* Add errors and utils
* Calculate a new layer's size
  Rather than having to re-instantiate layers, or do some sub-optimal traversal of a neural architecture's nn.Modules, just do the math for 'in-size' and 'out-size' for a given layer to make the math easier (see the sketch below).
* Clarify demo.py
* Trim down resize_layers in nn.widen module
* Complete resize_layers for #2
  Need a 'fit_layers', for when shrinking/pruning clips out too many neuron connections
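As a rough sketch of that "just do the math" idea (the helper name and its exact shape are hypothetical, not part of this PR):

```python
import torch.nn as nn

def layer_sizes(layer):
    """Hypothetical inspector: read a layer's in/out size straight from its
    attributes instead of re-instantiating it or traversing nn.Modules."""
    if isinstance(layer, nn.Linear):
        return layer.in_features, layer.out_features
    if isinstance(layer, nn.Conv2d):
        return layer.in_channels, layer.out_channels
    raise TypeError(f"Unsupported layer type: {type(layer).__name__}")

# e.g. layer_sizes(nn.Linear(784, 128)) == (784, 128)
```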
__Solution__: When running tests from the command line, __always__ run them as a module (`python -m unittest`) and use the verbose (`-v`) flag to see where Python is breaking (example below).
* Then you'll be able to trace down the problem.
* Do this a few times, and it will become reflex.
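For instance, with a test module laid out like this (the file and test names here are hypothetical):

```python
# tests/test_utils.py (hypothetical path)
import unittest

class TestUtils(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main(verbosity=2)  # same output as passing -v on the CLI

# From the command line:
#   python -m unittest -v tests.test_utils
```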
* Attempt to test for #4
  PyTorch's boolean comparison crap isn't useful and makes it a pain to test exact tensor values.
  * Will resume later
* Skipping sparsify test
  It's a painfully simple function that has worked every time I've used it.
  - No, it doesn't handle every edge case
  + Yes, it gets the job done and can be packaged for the general case
* Use instance `.nonzero()` instead of `torch.nonzero()`
* Fix "type-check" in layer inspectors
* WIP: Implement shrink() in terms of resize_layers()
  It was as easy as I wanted it to be.
  * The complexity is how to handle a given nested layer
    + Those will get implemented with a given feature
    - Need to program feature detection
  TODO:
  + Implement the resizing on a layer-by-layer case, to make the shrinking a bit different
  + Instead of applying the data transformation uniformly, each layer gets its own factor
  + Those factors will be computed as 1 - percent_waste(layer) (see the sketch below)
* Lay out skeleton for the true shrinking algo #4
  * shrink_layer() is simple
* Justification for giving Shrinkage an 'input_dimensions' property:
  > The thought is that channel depth doesn't change the output dimensions for CNNs, and that's the attribute we're concerned with in the convolutional case...
  * Linear layers only have two dimensions, so it's a huge deal there.
  * RNNs do linear things over 'timesteps', so it's a big deal there.
  * Residual/identity/skip-connections in CNNs need this.
  > __It's decided__. The attribute stays
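A minimal sketch of the per-layer factor from that TODO; `percent_waste` is named in the commit message, but its definition here (the fraction of zeroed weights, found via the instance `.nonzero()` call mentioned above) is an assumption:

```python
import torch.nn as nn

def percent_waste(layer):
    """Assumed definition: the fraction of a layer's weights that are zero."""
    weight = layer.weight.data
    return 1.0 - weight.nonzero().size(0) / weight.numel()

def shrink_factor(layer):
    """Per-layer shrink factor, as proposed above: 1 - percent_waste(layer)."""
    return 1.0 - percent_waste(layer)

# A freshly initialized layer has (almost) no zero weights, so its
# shrink factor is ~1.0, i.e. keep nearly everything.
print(shrink_factor(nn.Linear(16, 8)))
```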
rishab-sharma previously approved these changes on Apr 25, 2019
1. Simple sequential linear model for testing
2. Simple 3-layer conv net, with a brief explanation of project goals
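A sketch of what those two fixtures might look like (the layer sizes are illustrative, not taken from the PR):

```python
import torch.nn as nn

# 1. Simple sequential linear model for testing
linear_model = nn.Sequential(
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 8),
)

# 2. Simple 3-layer conv net
conv_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3),
)
```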
These are covered well in the Project, but to summarize:
Crashes on bad import. Confusing documentation. Network mangling.
What is the new behavior (if this is a feature change)?
`nn.Conv2d` and `nn.Linear` layers (see `morph._models`)
Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?)
Ooooh yes. Just invoke `morph.once(model)`. The initial API made it uncomfortable to use, but the new example outlines the user-centric changes proposed herein.
Should not be merged until issues #2, #3, and #4 are finished, let alone others that need completion.