Todo:
* Create one readme.md
* Write bump/publish script
* Write script for updating the changelog
* Write script to update the api docs
* Clean up deprecated repos later on:
  * [ ] standard-packages
  * [ ] base-package
  * [ ] file
  * [ ] gridfs
  * [ ] access-point
  * [ ] tempstore
  * [ ] collection
  * [ ] storage-adapter
  * [ ] data-man
  * [ ] collection-filters
  * [ ] worker
  * [ ] upload-http
  * [ ] filesystem
  * [ ] filesaver
  * [ ] s3
Due by February 21, 2015 • 0/1 issues closed

We should be able to use client-side storage when possible:

1. When uploading a file, we should use the local file until the data is uploaded
2. On Cordova, packaged apps etc. we actually have access to a filesystem; we should be able to utilize this
3. Caching and preloading of files requires some form of control over the client-side data

It's tricky to handle files on the client side; we have a lot of problems here in the form of performance, synchronization, memory allocation etc. But we can make the app seem faster by doing forms of latency compensation, caching and preloading.

Using Blob object URLs instead of data URLs saves memory - and if used correctly we should be able to keep track of usages, e.g. UI helpers should have both a constructor and a destructor function (see the sketch below) - or we could keep track of published files in an FS.Collection.
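A minimal client-side sketch of the constructor / destructor idea for Blob object URLs; `showLocalPreview` is a hypothetical helper for illustration, not part of the cfs api:

```js
// Hypothetical helper illustrating the constructor / destructor pattern
// for Blob object URLs - not part of the cfs api.
function showLocalPreview(file, imgElement) {
  // "Constructor": create an object URL that references the Blob in place,
  // instead of copying its bytes into a data URL.
  var url = URL.createObjectURL(file);
  imgElement.src = url;

  // "Destructor": revoke the URL once the image has loaded so the browser
  // can release the underlying memory.
  imgElement.onload = function () {
    URL.revokeObjectURL(url);
  };
}
```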
Due by July 4, 2014 • 0/1 issues closed

If we have a storage adapter where a file's data gets updated, this change should be reflected back to cfs - if the sync flag is set in the SA's options.

> The user can use `transformWrite` / `transformRead` to make sure the SA can transform from / to versions if needed (see the sketch below).
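A hedged sketch of the transform hooks mentioned above, following the 0.4-era store options; the gzip round trip is only an illustration, not the actual sync implementation:

```js
var zlib = Npm.require('zlib');

var store = new FS.Store.FileSystem('files', {
  // Transform data on its way into the store
  transformWrite: function (fileObj, readStream, writeStream) {
    readStream.pipe(zlib.createGzip()).pipe(writeStream);
  },
  // Transform data on its way out of the store
  transformRead: function (fileObj, readStream, writeStream) {
    readStream.pipe(zlib.createGunzip()).pipe(writeStream);
  }
});
```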
Due by July 31, 2014 • 0/3 issues closed

We should have a better way of transporting file data: a generic streaming interface that could work with all protocols - http / sockets etc.

Just some thoughts on file upload - we abandoned DDP for file upload in CollectionFS. The reason was heavy penalties in performance and overhead on the connection etc. But socket streams should be just as fast as http, and possibly safer.

I'm currently thinking it would be much cleaner to use streams instead of methods - maybe named streams, so the server could handle different types of streams instead of handling this in methods. This would make it easier to stream data directly from FileReader on to the server. Eg.

```js
// Write the data
FileReader.createReadStream().pipe(Meteor.createWriteStream('files', params));

// Write streams can be defined on both client and server,
// e.g. if the server wants to send data to the client.
Meteor.Streams({
  'files': function(stream, params) {
    // Maybe check this.userId?
    // We would just write to the TempStore
    if (stream.readable) {
      stream.pipe(fs.createWriteStream('/path/foo'));
    } else {
      return fs.createReadStream('/path/foo');
    }
  }
});

// Read the data
Meteor.createReadStream('files', params).pipe(myBlob.createWriteStream());
```
Due by June 19, 2014 • 0/1 issues closed

When scaling, we should be able to have the data sent directly from the client to remote storage adapters like S3. This feature is already prototyped in a private repo, `cloudFS`.

TODO:
* [ ] Implement the temporary redirect if a storage adapter provides a `getSignedUrl` for a file
* [ ] Implement the signing method for upload access (see the sketch below)
* [ ] Implement the direct upload - preferably a multipart upload, but for starters the method found in `cloudFS`
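A hypothetical server-side sketch of the signing method, using the aws-sdk v2 `getSignedUrl` call; the bucket name, key scheme and expiry are assumptions:

```js
var AWS = Npm.require('aws-sdk');
var s3 = new AWS.S3();

// Hypothetical signing method: returns a url the client can PUT the file
// body to directly, bypassing the Meteor server.
function getSignedUploadUrl(fileObj) {
  return s3.getSignedUrl('putObject', {
    Bucket: 'my-bucket',            // assumption
    Key: 'uploads/' + fileObj._id,  // assumption
    Expires: 600                    // url valid for 10 minutes
  });
}
```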
Due by May 15, 2014 • 0/1 issues closed

The FileWorker architecture is based on workers / instances pulling tasks as their resources allow. Reasons:

1. When scaling Meteor we will run on multiple instances using oplog to keep data in sync - but uploaded files stored in the TempStore should be available to all instances (we can lose instances in some architectures)
2. The concept of transform streams allows for heavy analysis of data / files - so we would like to be able to isolate those tasks so they cannot crash the main servers

We could scale by having multiple Meteor instances and use GridFS for the TempStore (uploaded files).

Tasks will be retrieved using a request pattern via the db, ensuring one worker per task (see the sketch below).

* [ ] Rework the cfs-worker, implement better throttling
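A minimal sketch of the request pattern, assuming a hypothetical `Tasks` tracking collection and a Meteor version that exposes `rawCollection()`; this is not the actual cfs-worker code:

```js
var Tasks = new Mongo.Collection('cfs.tasks'); // hypothetical tracking collection

// Atomically claim the oldest pending task so exactly one worker runs it.
// findOneAndUpdate returns a promise resolving to { value: <task or null> }.
function claimNextTask(workerId) {
  return Tasks.rawCollection().findOneAndUpdate(
    { status: 'pending' },
    { $set: { status: 'claimed', workerId: workerId, claimedAt: new Date() } },
    { sort: { createdAt: 1 }, returnOriginal: false }
  );
}
```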
Due by May 31, 2014 • 2/4 issues closed

Reasons:

1. Some OS's reset the temporary folder, removing all uploaded data (this is expected)
2. Not all deployments allow access to the OS temp folder
3. We prepare for the fileWorker implementation, which allows us to scale

The user can assign a storage adapter to the TempStore. The cfs-worker will have the TempStore default to GridFS if cfs-gridfs is added to the project, but it will not overwrite user-configured settings.

We may have a collection to track file uploads at some point. The TempStore is event based and allows others to listen for events. That said, we have to have a way of tracking temporary files and the status of uploads, which is why a collection could be nice. The file workers could observe this collection, or listen to events - again, a flexible approach.

The TempStore presents a streaming api that allows data to be:
* uploaded in chunks
* passed directly from the server api
* synchronized from a store

Write stream
* `FS.TempStore.createWriteStream(fileObj);`
* `FS.TempStore.createWriteStream(fileObj, storeName);`
* `FS.TempStore.createWriteStream(fileObj, chunkNumber);`

Read stream
* `FS.TempStore.createReadStream(fileObj);`

Remove
* `FS.TempStore.removeFile(fileObj);`

List chunks / parts
* `FS.TempStore.listParts(fileObj)`

File exists
* `FS.TempStore.exists(fileObj)`

Events

The TempStore is a Node.js EventEmitter and emits the following events (see the listener sketch below):
* `start` - callback( fileObj, chunkNumber )
* `progress` - callback( fileObj, chunk, chunkCount )
* `remove` - callback( fileObj, filePath )

*ready events*
* `uploaded` - callback( fileObj )
* `stored` - callback( fileObj )
* `synchronized` - callback( fileObj, storeName )
* `ready` - callback( fileObj, options )

options can be:
- `undefined` - directly stored from the server api
- `string` - name of the store
- `number` - sum of chunks

TODO: We should have a collection to track files, making our own temporary storage, removing files when we intend to and not depending on one type of storage.

* [x] Convert the current implementation to use the FS.Store.FileSystem SA
* [ ] Track files in a collection, and use `storageId` instead of `fileId` - *`uploadId` is used by S3, but we do more than just upload; we have a direct api and SA synchronization*
* [ ] A file could be stored twice; make sure the latest wins, e.g. the old one may not emit `ready` events
* [ ] The tracking collection should have an `expireAt` on each file - this way we make sure to remove old data
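A short sketch of listening for the events listed above; the handler bodies are only illustrative:

```js
FS.TempStore.on('progress', function (fileObj, chunk, chunkCount) {
  console.log('Chunk ' + chunk + '/' + chunkCount + ' received for ' + fileObj._id);
});

FS.TempStore.on('ready', function (fileObj, options) {
  if (typeof options === 'number') {
    // sum of chunks - the file arrived via chunked upload
  } else if (typeof options === 'string') {
    // name of the store the file was synchronized from
  } else {
    // undefined - the file was stored directly from the server api
  }
});
```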
Due by April 17, 2014 • 3/3 issues closed

The final remaining items to be able to release the new api for CollectionFS. It's a complete rewrite and cannot be compared directly with "0.3.7".

* [x] Serve files securely over HTTP
* [x] Have the concept of `Storage Adapters` (SA's): S3 / GridFS / FileSystem
* [x] Use streams from endpoint to Storage Adapter
* [x] FS.File - file data with an api, makes it much easier to work with
* [x] FS.File - EJSON custom type; if you drop an FS.File in a Meteor.method or Meteor.Collection, it's converted to a small reference object etc.
* [x] FS.Data - allows one to attach data like file/blob/Buffer/url/path/readStream etc.
* [x] Use official GridFS api
* [x] Use official S3 api
* [x] Transformation streams (transformWrite/transformRead)
* [x] Direct gm (GraphicsMagick) api in transformation streams - see the sketch below
* [x] Better throttling for uploads/downloads
* [x] Have the project split into small modular packages, making it easier to debug and maintain
* [ ] Add s3cloud, a client storage adapter that provides an uploader and signed url on the server
* [ ] Write tests - it's not released until this field is checked :)
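A sketch of the direct gm api inside a transformation stream, along the lines of the 0.4-era docs; the store name and thumbnail size are assumptions:

```js
var gm = Npm.require('gm');

var thumbs = new FS.Store.GridFS('thumbs', {
  // Resize every stored image to a 100x100 thumbnail while it streams
  // from the TempStore into this store.
  transformWrite: function (fileObj, readStream, writeStream) {
    gm(readStream, fileObj.name()).resize('100', '100').stream().pipe(writeStream);
  }
});
```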
Due by March 31, 2014 • 32/42 issues closed