Next: release candidate #1112 (Merged)

Conversation
- …ed in convert_mnist_data.cpp to lmdb
- fix for tools/extra/parse_log.sh
- convert MNIST demo to lmdb, fixes
- Clean up pycaffe core
- …radient and AdaGrad
- TestGradientBasedSolver
- (euclidean) error
- test_iter (should be 100 instead of 50)
Note that we are dropping some checks from the LRN layer. However, these checks are fairly redundant; something is very wrong if these layers are producing top blobs that are different sizes than their inputs, and tests are the right place to catch that. The thing that really should be checked (but isn't) is that local_size needs to be odd; this will be added in a future commit.
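For concreteness, here is a minimal sketch of that odd-window check, written as a standalone helper using the glog-style CHECK macros Caffe already relies on; the function name and free-function placement are illustrative, since the real check belongs in the LRN layer's setup code.

```cpp
// Hypothetical helper sketching the missing check; in Caffe the equivalent
// check would live in the LRN layer's setup, not a free function.
#include <glog/logging.h>

void CheckLRNLocalSize(int local_size) {
  // The normalization window is centered on each element, so it must be odd.
  CHECK_EQ(local_size % 2, 1) << "LRN local_size must be odd; got " << local_size;
}
```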
Strictly speaking, Reshape doesn't need to be called until the first Forward call; however, much existing code (especially tests) assumes that top blobs will be set up in SetUp, so we may as well do it there.
Now that top blobs are set up in Layer::Reshape, it's Reshape that is mandatory, and simple layers often don't need to implement LayerSetUp. Reshape is (already) declared abstract, so not implementing it is a compile-time error.
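As a sketch of what this looks like for a layer author, the hypothetical IdentityLayer below implements only Reshape plus the mandatory Forward_cpu/Backward_cpu and skips LayerSetUp entirely; it uses the Layer signatures of a released Caffe, which differ slightly from those at the time of this PR.

```cpp
#include <vector>
#include "caffe/layer.hpp"
#include "caffe/util/math_functions.hpp"  // caffe_copy

namespace caffe {

// Hypothetical pass-through layer: no parameters, so no LayerSetUp override.
template <typename Dtype>
class IdentityLayer : public Layer<Dtype> {
 public:
  explicit IdentityLayer(const LayerParameter& param) : Layer<Dtype>(param) {}
  virtual inline const char* type() const { return "Identity"; }

  // Mandatory: size the top blob. Called on every Forward, so input sizes
  // may change between calls without re-running SetUp.
  virtual void Reshape(const std::vector<Blob<Dtype>*>& bottom,
                       const std::vector<Blob<Dtype>*>& top) {
    top[0]->ReshapeLike(*bottom[0]);
  }

 protected:
  virtual void Forward_cpu(const std::vector<Blob<Dtype>*>& bottom,
                           const std::vector<Blob<Dtype>*>& top) {
    caffe_copy(bottom[0]->count(), bottom[0]->cpu_data(),
               top[0]->mutable_cpu_data());
  }
  virtual void Backward_cpu(const std::vector<Blob<Dtype>*>& top,
                            const std::vector<bool>& propagate_down,
                            const std::vector<Blob<Dtype>*>& bottom) {
    if (propagate_down[0]) {
      caffe_copy(top[0]->count(), top[0]->cpu_diff(),
                 bottom[0]->mutable_cpu_diff());
    }
  }
};

}  // namespace caffe
```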
Since we are now calling Reshape in the Forward pass, it's only fair to include it when timing. Reshape calls should normally be four or so orders of magnitude faster than Forward calls; this change also makes it easy to notice a mistake that causes something slow to happen in Reshape.
Note that it is not normally necessary to call this function when using reshapable nets, but sometimes it can be useful to compute the sizes of intermediate layers without waiting for the forward pass.
On-the-fly net resizing, without reallocation (where possible)
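A sketch of that use case, assuming a deploy net whose data blob is a declared net input (e.g. via an Input layer) and a Net constructor that takes a phase; the prototxt path and the new dimensions are placeholders.

```cpp
#include <cstdio>
#include "caffe/caffe.hpp"

int main() {
  caffe::Net<float> net("deploy.prototxt", caffe::TEST);  // placeholder path

  // Resize the input blob, then propagate shapes through the whole net.
  // No data is computed; only blob shapes are recomputed.
  net.input_blobs()[0]->Reshape(1, 3, 384, 512);
  net.Reshape();

  // Report the resulting size of every intermediate blob.
  for (size_t i = 0; i < net.blobs().size(); ++i) {
    const caffe::Blob<float>& b = *net.blobs()[i];
    std::printf("%s: %d x %d x %d x %d\n", net.blob_names()[i].c_str(),
                b.num(), b.channels(), b.height(), b.width());
  }
  return 0;
}
```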
[model zoo] download gist script
- invoke by shell
- default download dir to models/
- save to flat dir of owner-gist instead of nested owner/gist
Add contrastive loss layer, tests, and a siamese network example
shelhamer added a commit that referenced this pull request on Sep 19, 2014
mitmul pushed a commit to mitmul/caffe that referenced this pull request on Sep 30, 2014: Next: release candidate
RazvanRanca pushed a commit to RazvanRanca/caffe that referenced this pull request on Nov 4, 2014: Next: release candidate
The next release packages up 400+ commits by 18 authors. Thanks all!
EltwiseLayer operation.

DOCUMENTATION: there is tutorial documentation and developer API documentation courtesy of Doxygen (thanks to Jeff!). The documentation, in particular the developer API, is still in progress, so come help and join the comment crusade!
DEPENDENCIES: CUDA 6.5 is the suggested version. cuDNN is an acceleration library for deep network operations with drop-in integration to Caffe. It is not required but suggested for best performance. See #1046.
DEPRECATION: transformation parameters now have their own configuration message to reduce duplication across the data layers; a data layer with a transform_param block, as sketched below, is now the proper format. Old models are currently automagically upgraded, but you should upgrade with the included tools upgrade_net_proto_{text,binary}.
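As an illustration (a sketch with placeholder layer name, LMDB path, and values, in the layers/DATA prototxt syntax of this release, not the exact example from the original description), a data layer with its pixel scaling moved into a transform_param block might look like:

```
layers {
  name: "mnist"
  type: DATA
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00390625   # input scaling now lives in transform_param
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"  # placeholder LMDB path
    backend: LMDB
    batch_size: 64
  }
}
```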