the beta28 release will contain two new spread operations, available for all major data types: zip and unzip. what they do is best explained by two screenshots of their help patches:
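for readers who prefer pseudocode to screenshots, here is a rough sketch in python (not actual vvvv or plugin code) of the slice-wise semantics the two nodes are assumed to have: zip interleaves the slices of its input spreads, unzip distributes the slices of one spread round-robin onto its outputs.

    # rough sketch of the assumed slice-wise semantics, not actual vvvv code
    # (for simplicity it ignores vvvv's usual resampling of spreads of unequal length)
    def zip_spreads(*spreads):
        # interleave slices: [1, 2, 3] and [10, 20, 30] -> [1, 10, 2, 20, 3, 30]
        return [s for group in zip(*spreads) for s in group]

    def unzip_spread(spread, outputs=2):
        # distribute slices round-robin onto `outputs` output spreads
        return [spread[i::outputs] for i in range(outputs)]

    print(zip_spreads([1, 2, 3], [10, 20, 30]))    # [1, 10, 2, 20, 3, 30]
    print(unzip_spread([1, 10, 2, 20, 3, 30], 2))  # [[1, 2, 3], [10, 20, 30]]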
for performance reasons both zip and unzip come in a normal and a bin sized version. if, for example, the task at hand is to zip two spreads slice by slice, the normal version will be much faster than the bin sized version with its bin size set to one. so if possible, use the normal version.
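continuing the sketch above, the bin sized version is assumed to interleave whole bins instead of single slices. again this is only illustrative python, not the nodes' implementation:

    # assumed behaviour of the bin sized zip: interleave whole bins instead of single slices
    def zip_spreads_binned(spreads, bin_size):
        bins = [[sp[i:i + bin_size] for i in range(0, len(sp), bin_size)] for sp in spreads]
        return [s for group in zip(*bins) for chunk in group for s in chunk]

    print(zip_spreads_binned([[1, 2, 3, 4], [10, 20, 30, 40]], 2))
    # [1, 2, 10, 20, 3, 4, 30, 40]

    # a bin size of one reproduces the plain zip, just with extra per-slice bookkeeping --
    # which is why the normal version is the faster choice for slice-by-slice zipping
    print(zip_spreads_binned([[1, 2, 3], [10, 20, 30]], 1))
    # [1, 10, 2, 20, 3, 30]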
together with their bin sized versions and the ability to control the number of input pins (in case of zip) and output pins (in case of unzip) via a config pin, they form a very versatile set of nodes that can replace the functionality of various others, from a simple vector join/split up to the famous (and for beginners probably hardest to find) Stallone node, as illustrated below.
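to illustrate the vector join/split case (purely illustrative python, treating a spread of 3d vectors as its flat, interleaved xyz values): zipping an x, y and z spread gives the interleaved result of a Vector (3d Join), and unzipping it into three outputs gets you back to a Vector (3d Split):

    # illustrative only: zip as Vector (3d Join), unzip as Vector (3d Split)
    xs, ys, zs = [1.0, 2.0], [10.0, 20.0], [100.0, 200.0]

    xyz = [s for group in zip(xs, ys, zs) for s in group]  # "join": [1.0, 10.0, 100.0, 2.0, 20.0, 200.0]
    back = [xyz[i::3] for i in range(3)]                   # "split": [[1.0, 2.0], [10.0, 20.0], [100.0, 200.0]]

    print(xyz)
    print(back)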
by making use of the newly introduced streams they outshine their “competitors” by up to a factor of ten in terms of performance. the higher the slice count, the bigger their lead over the rest. only at very low slice counts (< 25) is their performance not quite as good as that of, for example, a native vector join, since, as with all plugins, the transition from the unmanaged world of vvvv to the managed world of a plugin comes with a small overhead.
Comments:
@Elias
So would this be better than a vector 3d join/split for managing, for example, a huge vertex buffer and applying some calculation in between?
Tnx
no no, this is not a jukebox. the name is the right one. we don’t want to have fast and slow versions of nodes that do exactly the same thing.
it is important that when you need a vector (join/split) you use a vector (join/split) and not some other node you heard is faster, because an upcoming version may well bring an optimization for vector (join/split) that is just as fast as or even faster than zip/unzip, and then you would have to change all those nodes back.
so use zip/unzip when you need general un/zipping functionality; otherwise use vector (join/split) and bug us to improve its speed. one step at a time…