· 9 years ago · Oct 14, 2016, 09:42 PM
1rclone(1) User Manual
2
3Nick Craig-Wood
4
5Aug 24, 2016
6
7Rclone
8
9Logo
10
11Rclone is a command line program to sync files and directories to and from
12
13Google Drive
14Amazon S3
15Openstack Swift / Rackspace cloud files / Memset Memstore
16Dropbox
17Google Cloud Storage
18Amazon Drive
19Microsoft One Drive
20Hubic
21Backblaze B2
22Yandex Disk
23The local filesystem
24Features
25
26MD5/SHA1 hashes checked at all times for file integrity
27Timestamps preserved on files
28Partial syncs supported on a whole file basis
29Copy mode to just copy new/changed files
30Sync (one way) mode to make a directory identical
31Check mode to check for file hash equality
32Can sync to and from network, eg two different cloud accounts
33Optional encryption (Crypt)
34Optional FUSE mount (rclone mount)
35Links
36
37Home page
38Github project page for source and bug tracker
39Google+ page
40Downloads
41Install
42
43Rclone is a Go program and comes as a single binary file.
44
45Download the relevant binary.
46
47Or alternatively if you have Go 1.5+ installed use
48
49go get github.com/ncw/rclone
50and this will build the binary in $GOPATH/bin. If you have built rclone before then you will want to update its dependencies first with this
51
52go get -u -v github.com/ncw/rclone/...
53See the Usage section of the docs for how to use rclone, or run rclone -h.
54
55linux binary downloaded files install example
56
57unzip rclone-v1.17-linux-amd64.zip
58cd rclone-v1.17-linux-amd64
59#copy binary file
60sudo cp rclone /usr/sbin/
61sudo chown root:root /usr/sbin/rclone
62sudo chmod 755 /usr/sbin/rclone
63#install manpage
64sudo mkdir -p /usr/local/share/man/man1
65sudo cp rclone.1 /usr/local/share/man/man1/
66sudo mandb
67Installation with Ansible
68
69This can be done with Stefan Weichinger's ansible role.
70
71Instructions
72
73git clone https://github.com/stefangweichinger/ansible-rclone.git into your local roles-directory
74add the role to the hosts you want rclone installed to:
75 - hosts: rclone-hosts
76 roles:
77 - rclone
78Configure
79
80First you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file .rclone.conf in your home directory by default. (You can use the --config option to choose a different config file.)
81
82The easiest way to make the config is to run rclone with the config option:
83
84rclone config
85See the following for detailed instructions for
86
87Google drive
88Amazon S3
89Swift / Rackspace Cloudfiles / Memset Memstore
90Dropbox
91Google Cloud Storage
92Local filesystem
93Amazon Drive
94Backblaze B2
95Hubic
96Microsoft One Drive
97Yandex Disk
98Crypt - to encrypt other remotes
99Usage
100
101Rclone syncs a directory tree from one storage system to another.
102
103Its syntax is like this
104
105Syntax: [options] subcommand <parameters> <parameters...>
106Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive.
107
108You can define as many storage paths as you like in the config file.
109
110Subcommands
111
112rclone uses a system of subcommands. For example
113
114rclone ls remote:path # lists a re
115rclone copy /local/path remote:path # copies /local/path to the remote
116rclone sync /local/path remote:path # syncs /local/path to the remote
117rclone config
118
119Enter an interactive configuration session.
120
121Synopsis
122
123Enter an interactive configuration session.
124
125rclone config
126rclone copy
127
128Copy files from source to dest, skipping already copied
129
130Synopsis
131
132Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.
133
134Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.
135
136If dest:path doesn't exist, it is created and the source:path contents go there.
137
138For example
139
140rclone copy source:sourcepath dest:destpath
141Let's say there are two files in sourcepath
142
143sourcepath/one.txt
144sourcepath/two.txt
145This copies them to
146
147destpath/one.txt
148destpath/two.txt
149Not to
150
151destpath/sourcepath/one.txt
152destpath/sourcepath/two.txt
153If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.
154
155See the --no-traverse option for controlling whether rclone lists the destination directory or not.
156
157rclone copy source:path dest:path
158rclone sync
159
160Make source and dest identical, modifying destination only.
161
162Synopsis
163
164Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.
165
166Important: Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted.
167
168Note that files in the destination won't be deleted if there were any errors at any point.
169
170It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure.
171
172If dest:path doesn't exist, it is created and the source:path contents go there.
173
174rclone sync source:path dest:path
175rclone move
176
177Move files from source to dest.
178
179Synopsis
180
181Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap.
182
183If no filters are in use and if possible this will server side move source:path into dest:path. After this source:path will no longer longer exist.
184
185Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path.
186
187Important: Since this can cause data loss, test first with the --dry-run flag.
188
189rclone move source:path dest:path
190rclone delete
191
192Remove the contents of path.
193
194Synopsis
195
196Remove the contents of path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files.
197
198Eg delete all files bigger than 100MBytes
199
200Check what would be deleted first (use either)
201
202rclone --min-size 100M lsl remote:path
203rclone --dry-run --min-size 100M delete remote:path
204Then delete
205
206rclone --min-size 100M delete remote:path
207That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.
208
209rclone delete remote:path
210rclone purge
211
212Remove the path and all of its contents.
213
214Synopsis
215
216Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use delete if you want to selectively delete files.
217
218rclone purge remote:path
219rclone mkdir
220
221Make the path if it doesn't already exist.
222
223Synopsis
224
225Make the path if it doesn't already exist.
226
227rclone mkdir remote:path
228rclone rmdir
229
230Remove the path if empty.
231
232Synopsis
233
234Remove the path. Note that you can't remove a path with objects in it, use purge for that.
235
236rclone rmdir remote:path
237rclone check
238
239Checks the files in the source and destination match.
240
241Synopsis
242
243Checks the files in the source and destination match. It compares sizes and MD5SUMs and prints a report of files which don't match. It doesn't alter the source or destination.
244
245--size-only may be used to only compare the sizes, not the MD5SUMs.
246
247rclone check source:path dest:path
248rclone ls
249
250List all the objects in the the path with size and path.
251
252Synopsis
253
254List all the objects in the the path with size and path.
255
256rclone ls remote:path
257rclone lsd
258
259List all directories/containers/buckets in the the path.
260
261Synopsis
262
263List all directories/containers/buckets in the the path.
264
265rclone lsd remote:path
266rclone lsl
267
268List all the objects path with modification time, size and path.
269
270Synopsis
271
272List all the objects path with modification time, size and path.
273
274rclone lsl remote:path
275rclone md5sum
276
277Produces an md5sum file for all the objects in the path.
278
279Synopsis
280
281Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.
282
283rclone md5sum remote:path
284rclone sha1sum
285
286Produces an sha1sum file for all the objects in the path.
287
288Synopsis
289
290Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.
291
292rclone sha1sum remote:path
293rclone size
294
295Prints the total size and number of objects in remote:path.
296
297Synopsis
298
299Prints the total size and number of objects in remote:path.
300
301rclone size remote:path
302rclone version
303
304Show the version number.
305
306Synopsis
307
308Show the version number.
309
310rclone version
311rclone cleanup
312
313Clean up the remote if possible
314
315Synopsis
316
317Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.
318
319rclone cleanup remote:path
320rclone dedupe
321
322Interactively find duplicate files delete/rename them.
323
324Synopsis
325
326By default dedup interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.
327
328The dedupe command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive. You can use --dry-run to see what would happen without doing anything.
329
330Here is an example run.
331
332Before - with duplicates
333
334$ rclone lsl drive:dupes
335 6048320 2016-03-05 16:23:16.798000000 one.txt
336 6048320 2016-03-05 16:23:11.775000000 one.txt
337 564374 2016-03-05 16:23:06.731000000 one.txt
338 6048320 2016-03-05 16:18:26.092000000 one.txt
339 6048320 2016-03-05 16:22:46.185000000 two.txt
340 1744073 2016-03-05 16:22:38.104000000 two.txt
341 564374 2016-03-05 16:22:52.118000000 two.txt
342Now the dedupe session
343
344$ rclone dedupe drive:dupes
3452016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
346one.txt: Found 4 duplicates - deleting identical copies
347one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
348one.txt: 2 duplicates remain
349 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
350 2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
351s) Skip and do nothing
352k) Keep just one (choose which in next step)
353r) Rename all to be different (by changing file.jpg to file-1.jpg)
354s/k/r> k
355Enter the number of the file to keep> 1
356one.txt: Deleted 1 extra copies
357two.txt: Found 3 duplicates - deleting identical copies
358two.txt: 3 duplicates remain
359 1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
360 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
361 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
362s) Skip and do nothing
363k) Keep just one (choose which in next step)
364r) Rename all to be different (by changing file.jpg to file-1.jpg)
365s/k/r> r
366two-1.txt: renamed from: two.txt
367two-2.txt: renamed from: two.txt
368two-3.txt: renamed from: two.txt
369The result being
370
371$ rclone lsl drive:dupes
372 6048320 2016-03-05 16:23:16.798000000 one.txt
373 564374 2016-03-05 16:22:52.118000000 two-1.txt
374 6048320 2016-03-05 16:22:46.185000000 two-2.txt
375 1744073 2016-03-05 16:22:38.104000000 two-3.txt
376Dedupe can be run non interactively using the --dedupe-mode flag or by using an extra parameter with the same value
377
378--dedupe-mode interactive - interactive as above.
379--dedupe-mode skip - removes identical files then skips anything left.
380--dedupe-mode first - removes identical files then keeps the first one.
381--dedupe-mode newest - removes identical files then keeps the newest one.
382--dedupe-mode oldest - removes identical files then keeps the oldest one.
383--dedupe-mode rename - removes identical files then renames the rest to be different.
384For example to rename all the identically named photos in your Google Photos directory, do
385
386rclone dedupe --dedupe-mode rename "drive:Google Photos"
387Or
388
389rclone dedupe rename "drive:Google Photos"
390rclone dedupe [mode] remote:path
391Options
392
393 --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename.
394rclone authorize
395
396Remote authorization.
397
398Synopsis
399
400Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.
401
402rclone authorize
403rclone cat
404
405Concatenates any files and sends them to stdout.
406
407Synopsis
408
409rclone cat sends any files to standard output.
410
411You can use it like this to output a single file
412
413rclone cat remote:path/to/file
414Or like this to output any file in dir or subdirectories.
415
416rclone cat remote:path/to/dir
417Or like this to output any .txt files in dir or subdirectories.
418
419rclone --include "*.txt" cat remote:path/to/dir
420rclone cat remote:path
421rclone genautocomplete
422
423Output bash completion script for rclone.
424
425Synopsis
426
427Generates a bash shell autocompletion script for rclone.
428
429This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg
430
431sudo rclone genautocomplete
432Logout and login again to use the autocompletion scripts, or source them directly
433
434. /etc/bash_completion
435If you supply a command line argument the script will be written there.
436
437rclone genautocomplete [output_file]
438rclone gendocs
439
440Output markdown docs for rclone to the directory supplied.
441
442Synopsis
443
444This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
445
446rclone gendocs output_directory
447rclone mount
448
449Mount the remote as a mountpoint. EXPERIMENTAL
450
451Synopsis
452
453rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.
454
455This is EXPERIMENTAL - use with care.
456
457First set up your remote using rclone config. Check it works with rclone ls etc.
458
459Start the mount like this
460
461rclone mount remote:path/to/files /path/to/local/mount &
462Stop the mount with
463
464fusermount -u /path/to/local/mount
465Or with OS X
466
467umount -u /path/to/local/mount
468Limitations
469
470This can only read files seqentially, or write files sequentially. It can't read and write or seek in files.
471
472rclonefs inherits rclone's directory handling. In rclone's world directories don't really exist. This means that empty directories will have a tendency to disappear once they fall out of the directory cache.
473
474The bucket based FSes (eg swift, s3, google compute storage, b2) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work whereas swift:bucket will as will swift:bucket/path.
475
476Only supported on Linux, FreeBSD and OS X at the moment.
477
478rclone mount vs rclone sync/copy
479
480File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.
481
482Bugs
483
484All the remotes should work for read, but some may not for write
485those which need to know the size in advance won't - eg B2
486maybe should pass in size as -1 to mean work it out
487TODO
488
489Check hashes on upload/download
490Preserve timestamps
491Move directories
492rclone mount remote:path /path/to/mountpoint
493Options
494
495 --debug-fuse Debug the FUSE internals - needs -v.
496 --no-modtime Don't read the modification time (can speed things up).
497Copying single files
498
499rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.
500
501For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this
502
503rclone copy remote:test.jpg /tmp/download
504The file test.jpg will be placed inside /tmp/download.
505
506This is equivalent to specifying
507
508rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download
509Where /tmp/files contains the single line
510
511test.jpg
512It is recommended to use copy when copying single files not sync. They have pretty much the same effect but copy will use a lot less memory.
513
514Quoting and the shell
515
516When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.
517
518Here are some gotchas which may help users unfamiliar with the shell rules
519
520Linux / OSX
521
522If your names have spaces or shell metacharacters (eg *, ?, $, ', " etc) then you must quote them. Use single quotes ' by default.
523
524rclone copy 'Important files?' remote:backup
525If you want to send a ' you will need to use ", eg
526
527rclone copy "O'Reilly Reviews" remote:backup
528The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.
529
530Windows
531
532If your names have spaces in you need to put them in ", eg
533
534rclone copy "E:\folder name\folder name\folder name" remote:backup
535If you are using the root directory on its own then don't quote it (see #464 for why), eg
536
537rclone copy E:\ remote:backup
538Server Side Copy
539
540Drive, S3, Dropbox, Swift and Google Cloud Storage support server side copy.
541
542This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.
543
544Eg
545
546rclone copy s3:oldbucket s3:newbucket
547Will copy the contents of oldbucket to newbucket without downloading and re-uploading.
548
549Remotes which don't support server side copy (eg local) will download and re-upload in this case.
550
551Server side copies are used with sync and copy and will be identified in the log when using the -v flag.
552
553Server side copies will only be attempted if the remote names are the same.
554
555This can be used when scripting to make aged backups efficiently, eg
556
557rclone sync remote:current-backup remote:previous-backup
558rclone sync /path/to/files remote:current-backup
559Options
560
561Rclone has a number of options to control its behaviour.
562
563Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "�s"), "ms", "s", "m", "h".
564
565Options which use SIZE use kByte by default. However a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
566
567--bwlimit=SIZE
568
569Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is 0 which means to not limit bandwidth.
570
571For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
572
573This only limits the bandwidth of the data transfer, it doesn't limit the bandwith of the directory listings etc.
574
575--checkers=N
576
577The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg s3, swift, dropbox) this can take a significant amount of time so they are run in parallel.
578
579The default is to run 8 checkers in parallel.
580
581-c, --checksum
582
583Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.
584
585This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.
586
587This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.
588
589Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.
590
591When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.
592
593--config=CONFIG_FILE
594
595Specify the location of the rclone config file. Normally this is in your home directory as a file called .rclone.conf. If you run rclone -h and look at the help for the --config option you will see where the default location is for you. Use this flag to override the config location, eg rclone --config=".myconfig" .config.
596
597--contimeout=TIME
598
599Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.
600
601The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.
602
603--dedupe-mode MODE
604
605Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.
606
607-n, --dry-run
608
609Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.
610
611--ignore-existing
612
613Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.
614
615While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
616
617--ignore-size
618
619Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.
620
621It will also cause rclone to skip verifying the sizes are the same after transfer.
622
623This can be useful for transferring files to and from onedrive which occasionally misreports the size of image files (see #399 for more info).
624
625-I, --ignore-times
626
627Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.
628
629Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).
630
631--log-file=FILE
632
633Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.
634
635--low-level-retries NUMBER
636
637This controls the number of low level retries rclone does.
638
639A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.
640
641This shouldn't need to be changed from the default in normal operations, however if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.
642
643Disable low level retries with --low-level-retries 1.
644
645--max-depth=N
646
647This modifies the recursion depth for all the commands except purge.
648
649So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in first two directory levels and so on.
650
651For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.
652
653You can use this command to disable recursion (with --max-depth 1).
654
655Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.
656
657--modify-window=TIME
658
659When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.
660
661The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.
662
663This command line flag allows you to override that computed default.
664
665--no-gzip-encoding
666
667Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.
668
669There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.
670
671--no-update-modtime
672
673When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.
674
675This can be used if the remote is being synced with another tool also (eg the Google Drive client).
676
677-q, --quiet
678
679Normally rclone outputs stats and a completion message. If you set this flag it will make as little output as possible.
680
681--retries int
682
683Retry the entire sync if it fails this many times it fails (default 3).
684
685Some remotes can be unreliable and a few retries helps pick up the files which didn't get transferred because of errors.
686
687Disable retries with --retries 1.
688
689--size-only
690
691Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.
692
693This can be useful transferring files from dropbox which have been modified by the desktop sync client which doesn't set checksums of modification times in the same way as rclone.
694
695--stats=TIME
696
697Rclone will print stats at regular intervals to show its progress.
698
699This sets the interval.
700
701The default is 1m. Use 0 to disable.
702
703--delete-(before,during,after)
704
705This option allows you to specify when files on your destination are deleted when you sync folders.
706
707Specifying the value --delete-before will delete all files present on the destination, but not on the source before starting the transfer of any new or updated files. This uses extra memory as it has to store the source listing before proceeding.
708
709Specifying --delete-during (default value) will delete files while checking and uploading files. This is usually the fastest option. Currently this works the same as --delete-after but it may change in the future.
710
711Specifying --delete-after will delay deletion of files until all new/updated files have been successfully transfered.
712
713--timeout=TIME
714
715This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.
716
717The default is 5m. Set to 0 to disable.
718
719--transfers=N
720
721The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.
722
723The default is to run 4 file transfers in parallel.
724
725-u, --update
726
727This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.
728
729If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.
730
731On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remoes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
732
733This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.
734
735-v, --verbose
736
737If you set this flag, rclone will become very verbose telling you about every file it considers and transfers.
738
739Very useful for debugging.
740
741-V, --version
742
743Prints the version number
744
745Configuration Encryption
746
747Your configuration file contains information for logging in to your cloud services. This means that you should keep your .rclone.conf file in a secure location.
748
749If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to enter the password every time you start rclone.
750
751To add a password to your rclone configuration, execute rclone config.
752
753>rclone config
754Current remotes:
755
756e) Edit existing remote
757n) New remote
758d) Delete remote
759s) Set configuration password
760q) Quit config
761e/n/d/s/q>
762Go into s, Set configuration password:
763
764e/n/d/s/q> s
765Your configuration is not encrypted.
766If you add a password, you will protect your login information to cloud services.
767a) Add Password
768q) Quit to main menu
769a/q> a
770Enter NEW configuration password:
771password:
772Confirm NEW password:
773password:
774Password set
775Your configuration is encrypted.
776c) Change Password
777u) Unencrypt configuration
778q) Quit to main menu
779c/u/q>
780Your configuration is now encrypted, and every time you start rclone you will now be asked for the password. In the same menu you can change the password or completely remove encryption from your configuration.
781
782There is no way to recover the configuration if you lose your password.
783
784rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.
785
786While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, maybe except if you use a very strong password.
787
788If it is safe in your environment, you can set the RCLONE_CONFIG_PASS environment variable to contain your password, in which case it will be used for decrypting the configuration.
789
790If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password.
791
792Developer options
793
794These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option - see the docs for the remote in question.
795
796--cpuprofile=FILE
797
798Write CPU profile to file. This can be analysed with go tool pprof.
799
800--dump-bodies
801
802Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.
803
804--dump-filters
805
806Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.
807
808--dump-headers
809
810Dump HTTP headers - may contain sensitive info. Can be very verbose. Useful for debugging only.
811
812--memprofile=FILE
813
814Write memory profile to file. This can be analysed with go tool pprof.
815
816--no-check-certificate=true/false
817
818--no-check-certificate controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.
819
820This option defaults to false.
821
822This should be used only for testing.
823
824--no-traverse
825
826The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands.
827
828If you are only copying a small number of files and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.
829
830However if you are copying a large number of files, escpecially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse.
831
832It can also be used to reduce the memory usage of rclone when copying - rclone --no-traverse copy src dst won't load either the source or destination listings into memory so will use the minimum amount of memory.
833
834Filtering
835
836For the filtering options
837
838--delete-excluded
839--filter
840--filter-from
841--exclude
842--exclude-from
843--include
844--include-from
845--files-from
846--min-size
847--max-size
848--min-age
849--max-age
850--dump-filters
851See the filtering section.
852
853Logging
854
855rclone has 3 levels of logging, Error, Info and Debug.
856
857By default rclone logs Error and Info to standard error and Debug to standard output. This means you can redirect standard output and standard error to different places.
858
859By default rclone will produce Error and Info level messages.
860
861If you use the -q flag, rclone will only produce Error messages.
862
863If you use the -v flag, rclone will produce Error, Info and Debug messages.
864
865If you use the --log-file=FILE option, rclone will redirect Error, Info and Debug messages along with standard error to FILE.
866
867Exit Code
868
869If any errors occurred during the command, rclone will set a non zero exit code. This allows scripts to detect when rclone operations have failed.
870
871Configuring rclone on a remote / headless machine
872
873Some of the configurations (those involving oauth2) require an Internet connected web browser.
874
875If you are trying to set rclone up on a remote or headless box with no browser available on it (eg a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.
876
877Configuring using rclone authorize
878
879On the headless box
880
881...
882Remote config
883Use auto config?
884 * Say Y if not sure
885 * Say N if you are working on a remote or headless machine
886y) Yes
887n) No
888y/n> n
889For this to work, you will need rclone available on a machine that has a web browser available.
890Execute the following on your machine:
891 rclone authorize "amazon cloud drive"
892Then paste the result below:
893result>
894Then on your main desktop machine
895
896rclone authorize "amazon cloud drive"
897If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
898Log in and authorize rclone for access
899Waiting for code...
900Got code
901Paste the following into your remote machine --->
902SECRET_TOKEN
903<---End paste
904Then back to the headless box, paste in the code
905
906result> SECRET_TOKEN
907--------------------
908[acd12]
909client_id =
910client_secret =
911token = SECRET_TOKEN
912--------------------
913y) Yes this is OK
914e) Edit this remote
915d) Delete this remote
916y/e/d>
917Configuring by copying the config file
918
919Rclone stores all of its config in a single configuration file. This can easily be copied to configure a remote rclone.
920
921So first configure rclone on your desktop machine
922
923rclone config
924to set up the config file.
925
926Find the config file by running rclone -h and looking for the help for the --config option
927
928$ rclone -h
929[snip]
930 --config="/home/user/.rclone.conf": Config file.
931[snip]
932Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and place it in the correct place (use rclone -h on the remote box to find out where).
933
934Filtering, includes and excludes
935
936Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size.
937
938The filters are applied for the copy, sync, move, ls, lsl, md5sum, sha1sum, size, delete and check operations. Note that purge does not obey the filters.
939
940Each path as it passes through rclone is matched against the include and exclude rules like --include, --exclude, --include-from, --exclude-from, --filter, or --filter-from. The simplest way to try them out is using the ls command, or --dry-run together with -v.
941
942Important Due to limitations of the command line parser you can only use any of these options once - if you duplicate them then rclone will use the last one only.
943
944Patterns
945
946The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.
947
948If the pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote. If it doesn't start with / then it is matched starting at the end of the path, but it will only match a complete path element:
949
950file.jpg - matches "file.jpg"
951 - matches "directory/file.jpg"
952 - doesn't match "afile.jpg"
953 - doesn't match "directory/afile.jpg"
954/file.jpg - matches "file.jpg" in the root directory of the remote
955 - doesn't match "afile.jpg"
956 - doesn't match "directory/file.jpg"
957Important Note that you must use / in patterns and not \ even if running on Windows.
958
959A * matches anything but not a /.
960
961*.jpg - matches "file.jpg"
962 - matches "directory/file.jpg"
963 - doesn't match "file.jpg/something"
964Use ** to match anything, including slashes (/).
965
966dir/** - matches "dir/file.jpg"
967 - matches "dir/dir1/dir2/file.jpg"
968 - doesn't match "directory/file.jpg"
969 - doesn't match "adir/file.jpg"
970A ? matches any character except a slash /.
971
972l?ss - matches "less"
973 - matches "lass"
974 - doesn't match "floss"
975A [ and ] together make a a character class, such as [a-z] or [aeiou] or [[:alpha:]]. See the go regexp docs for more info on these.
976
977h[ae]llo - matches "hello"
978 - matches "hallo"
979 - doesn't match "hullo"
980A { and } define a choice between elements. It should contain a comma seperated list of patterns, any of which might match. These patterns can contain wildcards.
981
982{one,two}_potato - matches "one_potato"
983 - matches "two_potato"
984 - doesn't match "three_potato"
985 - doesn't match "_potato"
986Special characters can be escaped with a \ before them.
987
988\*.jpg - matches "*.jpg"
989\\.jpg - matches "\.jpg"
990\[one\].jpg - matches "[one].jpg"
991Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir
992
993Directories
994
995Rclone keeps track of directories that could match any file patterns.
996
997Eg if you add the include rule
998
999\a\*.jpg
1000Rclone will synthesize the directory include rule
1001
1002\a\
1003If you put any rules which end in \ then it will only match directories.
1004
1005Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory.
1006
1007Differences between rsync and rclone patterns
1008
1009Rclone implements bash style {a,b,c} glob matching which rsync doesn't.
1010
1011Rclone always does a wildcard match so \ must always escape a \.
1012
1013How the rules are used
1014
1015Rclone maintains a list of include rules and exclude rules.
1016
1017Each file is matched in order against the list until it finds a match. The file is then included or excluded according to the rule type.
1018
1019If the matcher falls off the bottom of the list then the path is included.
1020
1021For example given the following rules, + being include, - being exclude,
1022
1023- secret*.jpg
1024+ *.jpg
1025+ *.png
1026+ file2.avi
1027- *
1028This would include
1029
1030file1.jpg
1031file3.png
1032file2.avi
1033This would exclude
1034
1035secret17.jpg
1036non *.jpg and *.png
1037A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (Eg local, google drive, onedrive, amazon drive) and not on bucket based remotes (eg s3, swift, google compute storage, b2).
1038
1039Adding filtering rules
1040
1041Filtering rules are added with the following command line flags.
1042
1043--exclude - Exclude files matching pattern
1044
1045Add a single exclude rule with --exclude.
1046
1047Eg --exclude *.bak to exclude all bak files from the sync.
1048
1049--exclude-from - Read exclude patterns from file
1050
1051Add exclude rules from a file.
1052
1053Prepare a file like this exclude-file.txt
1054
1055# a sample exclude rule file
1056*.bak
1057file2.jpg
1058Then use as --exclude-from exclude-file.txt. This will sync all files except those ending in bak and file2.jpg.
1059
1060This is useful if you have a lot of rules.
1061
1062--include - Include files matching pattern
1063
1064Add a single include rule with --include.
1065
1066Eg --include *.{png,jpg} to include all png and jpg files in the backup and no others.
1067
1068This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.
1069
1070--include-from - Read include patterns from file
1071
1072Add include rules from a file.
1073
1074Prepare a file like this include-file.txt
1075
1076# a sample include rule file
1077*.jpg
1078*.png
1079file2.avi
1080Then use as --include-from include-file.txt. This will sync all jpg, png files and file2.avi.
1081
1082This is useful if you have a lot of rules.
1083
1084This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.
1085
1086--filter - Add a file-filtering rule
1087
1088This can be used to add a single include or exclude rule. Include rules start with + and exclude rules start with -. A special rule called ! can be used to clear the existing rules.
1089
1090Eg --filter "- *.bak" to exclude all bak files from the sync.
1091
1092--filter-from - Read filtering patterns from a file
1093
1094Add include/exclude rules from a file.
1095
1096Prepare a file like this filter-file.txt
1097
1098# a sample exclude rule file
1099- secret*.jpg
1100+ *.jpg
1101+ *.png
1102+ file2.avi
1103# exclude everything else
1104- *
1105Then use as --filter-from filter-file.txt. The rules are processed in the order that they are defined.
1106
1107This example will include all jpg and png files, exclude any files matching secret*.jpg and include file2.avi. Everything else will be excluded from the sync.
1108
1109--files-from - Read list of source-file names
1110
1111This reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.
1112
1113Prepare a file like this files-from.txt
1114
1115# comment
1116file1.jpg
1117file2.jpg
1118Then use as --files-from files-from.txt. This will only transfer file1.jpg and file2.jpg providing they exist.
1119
1120For example, let's say you had a few files you want to back up regularly with these absolute paths:
1121
1122/home/user1/important
1123/home/user1/dir/file
1124/home/user2/stuff
1125To copy these you'd find a common subdirectory - in this case /home and put the remaining files in files-from.txt with or without leading /, eg
1126
1127user1/important
1128user1/dir/file
1129user2/stuff
1130You could then copy these to a remote like this
1131
1132rclone copy --files-from files-from.txt /home remote:backup
1133The 3 files will arrive in remote:backup with the paths as in the files-from.txt.
1134
1135You could of course choose / as the root too in which case your files-from.txt might look like this.
1136
1137/home/user1/important
1138/home/user1/dir/file
1139/home/user2/stuff
1140And you would transfer it like this
1141
1142rclone copy --files-from files-from.txt / remote:backup
1143In this case there will be an extra home directory on the remote.
1144
1145--min-size - Don't transfer any file smaller than this
1146
1147This option controls the minimum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.
1148
1149For example --min-size 50k means no files smaller than 50kByte will be transferred.
1150
1151--max-size - Don't transfer any file larger than this
1152
1153This option controls the maximum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.
1154
1155For example --max-size 1G means no files larger than 1GByte will be transferred.
1156
1157--max-age - Don't transfer any file older than this
1158
1159This option controls the maximum age of files to transfer. Give in seconds or with a suffix of:
1160
1161ms - Milliseconds
1162s - Seconds
1163m - Minutes
1164h - Hours
1165d - Days
1166w - Weeks
1167M - Months
1168y - Years
1169For example --max-age 2d means no files older than 2 days will be transferred.
1170
1171--min-age - Don't transfer any file younger than this
1172
1173This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see --max-age for list of suffixes)
1174
1175For example --min-age 2d means no files younger than 2 days will be transferred.
1176
1177--delete-excluded - Delete files on dest excluded from sync
1178
1179Important this flag is dangerous - use with --dry-run and -v first.
1180
1181When doing rclone sync this will delete any files which are excluded from the sync on the destination.
1182
1183If for example you did a sync from A to B without the --min-size 50k flag
1184
1185rclone sync A: B:
1186Then you repeated it like this with the --delete-excluded
1187
1188rclone --min-size 50k --delete-excluded sync A: B:
1189This would delete all files on B which are less than 50 kBytes as these are now excluded from the sync.
1190
1191Always test first with --dry-run and -v before using this flag.
1192
1193--dump-filters - dump the filters to the output
1194
1195This dumps the defined filters to the output as regular expressions.
1196
1197Useful for debugging.
1198
1199Quoting shell metacharacters
1200
1201The examples above may not work verbatim in your shell as they have shell metacharacters in them (eg *), and may require quoting.
1202
1203Eg linux, OSX
1204
1205--include \*.jpg
1206--include '*.jpg'
1207--include='*.jpg'
1208In Windows the expansion is done by the command not the shell so this should work fine
1209
1210--include *.jpg
1211Overview of cloud storage systems
1212
1213Each cloud storage system is slighly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.
1214
1215Features
1216
1217Here is an overview of the major features of each cloud storage system.
1218
1219Name Hash ModTime Case Insensitive Duplicate Files
1220Google Drive MD5 Yes No Yes
1221Amazon S3 MD5 Yes No No
1222Openstack Swift MD5 Yes No No
1223Dropbox - No Yes No
1224Google Cloud Storage MD5 Yes No No
1225Amazon Drive MD5 No Yes No
1226Microsoft One Drive SHA1 Yes Yes No
1227Hubic MD5 Yes No No
1228Backblaze B2 SHA1 Yes No No
1229Yandex Disk MD5 Yes No No
1230The local filesystem All Yes Depends No
1231Hash
1232
1233The cloud storage system supports various hash types of the objects.
1234The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.
1235
1236To use the checksum checks between filesystems they must support a common hash type.
1237
1238ModTime
1239
1240The cloud storage system supports setting modification times on objects. If it does then this enables a using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum flag.
1241
1242All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.
1243
1244Case Insensitive
1245
1246If a cloud storage systems is case sensitive then it is possible to have two files which differ only in case, eg file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.
1247
1248This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
1249
1250The local filesystem may or may not be case sensitive depending on OS.
1251
1252Windows - usually case insensitive, though case is preserved
1253OSX - usually case insensitive, though it is possible to format case sensitive
1254Linux - usually case sensitive, but there are case insensitive file systems (eg FAT formatted USB keys)
1255Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.
1256
1257Duplicate files
1258
1259If a cloud storage system allows duplicate files then it can have two objects with the same name.
1260
1261This confuses rclone greatly when syncing - use the rclone dedupe command to rename or remove duplicates.
1262
1263Google Drive
1264
1265Paths are specified as drive:path
1266
1267Drive paths may be as deep as required, eg drive:directory/subdirectory.
1268
1269The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.
1270
1271Here is an example of how to make a remote called remote. First run:
1272
1273 rclone config
1274This will guide you through an interactive setup process:
1275
1276n) New remote
1277d) Delete remote
1278q) Quit config
1279e/n/d/q> n
1280name> remote
1281Type of storage to configure.
1282Choose a number from below, or type in your own value
1283 1 / Amazon Drive
1284 \ "amazon cloud drive"
1285 2 / Amazon S3 (also Dreamhost, Ceph)
1286 \ "s3"
1287 3 / Backblaze B2
1288 \ "b2"
1289 4 / Dropbox
1290 \ "dropbox"
1291 5 / Google Cloud Storage (this is not Google Drive)
1292 \ "google cloud storage"
1293 6 / Google Drive
1294 \ "drive"
1295 7 / Hubic
1296 \ "hubic"
1297 8 / Local Disk
1298 \ "local"
1299 9 / Microsoft OneDrive
1300 \ "onedrive"
130110 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
1302 \ "swift"
130311 / Yandex Disk
1304 \ "yandex"
1305Storage> 6
1306Google Application Client Id - leave blank normally.
1307client_id>
1308Google Application Client Secret - leave blank normally.
1309client_secret>
1310Remote config
1311Use auto config?
1312 * Say Y if not sure
1313 * Say N if you are working on a remote or headless machine or Y didn't work
1314y) Yes
1315n) No
1316y/n> y
1317If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
1318Log in and authorize rclone for access
1319Waiting for code...
1320Got code
1321--------------------
1322[remote]
1323client_id =
1324client_secret =
1325token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
1326--------------------
1327y) Yes this is OK
1328e) Edit this remote
1329d) Delete this remote
1330y/e/d> y
1331Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
1332
1333You can then use it like this,
1334
1335List directories in top level of your drive
1336
1337rclone lsd remote:
1338List all the files in your drive
1339
1340rclone ls remote:
1341To copy a local directory to a drive directory called backup
1342
1343rclone copy /home/source remote:backup
1344Modified time
1345
1346Google drive stores modification times accurate to 1 ms.
1347
1348Revisions
1349
1350Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.
1351
1352Revisions follow the standard google policy which at time of writing was
1353
1354They are deleted after 30 days or 100 revisions (whatever comes first).
1355They do not count towards a user storage quota.
1356Deleting files
1357
1358By default rclone will delete files permanently when requested. If sending them to the trash is required instead then use the --drive-use-trash flag.
1359
1360Specific options
1361
1362Here are the command line options specific to this cloud storage system.
1363
1364--drive-chunk-size=SIZE
1365
1366Upload chunk size. Must a power of 2 >= 256k. Default value is 8 MB.
1367
1368Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer.
1369
1370Reducing this will reduce memory usage but decrease performance.
1371
1372--drive-full-list
1373
1374No longer does anything - kept for backwards compatibility.
1375
1376--drive-upload-cutoff=SIZE
1377
1378File size cutoff for switching to chunked upload. Default is 8 MB.
1379
1380--drive-use-trash
1381
1382Send files to the trash instead of deleting permanently. Defaults to off, namely deleting files permanently.
1383
1384--drive-auth-owner-only
1385
1386Only consider files owned by the authenticated user. Requires that --drive-full-list=true (default).
1387
1388--drive-formats
1389
1390Google documents can only be exported from Google drive. When rclone downloads a Google doc it chooses a format to download depending upon this setting.
1391
1392By default the formats are docx,xlsx,pptx,svg which are a sensible default for an editable document.
1393
1394When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.
1395
1396If you prefer an archive copy then you might use --drive-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-formats ods,odt.
1397
1398Note that rclone adds the extension to the google doc, so if it is calles My Spreadsheet on google docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
1399
1400Here are the possible extensions with their corresponding mime types.
1401
1402Extension Mime Type Description
1403csv text/csv Standard CSV format for Spreadsheets
1404doc application/msword Micosoft Office Document
1405docx application/vnd.openxmlformats-officedocument.wordprocessingml.document Microsoft Office Document
1406html text/html An HTML Document
1407jpg image/jpeg A JPEG Image File
1408ods application/vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
1409ods application/x-vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
1410odt application/vnd.oasis.opendocument.text Openoffice Document
1411pdf application/pdf Adobe PDF Format
1412png image/png PNG Image Format
1413pptx application/vnd.openxmlformats-officedocument.presentationml.presentation Microsoft Office Powerpoint
1414rtf application/rtf Rich Text Format
1415svg image/svg+xml Scalable Vector Graphics Format
1416txt text/plain Plain Text
1417xls application/vnd.ms-excel Microsoft Office Spreadsheet
1418xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Microsoft Office Spreadsheet
1419zip application/zip A ZIP file of HTML, Images CSS
1420Limitations
1421
1422Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.
1423
1424Making your own client_id
1425
1426When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.
1427
1428However you might find you get better performance making your own client_id if you are a heavy user. Or you may not depending on exactly how Google have been raising rclone's rate limit.
1429
1430Here is how to create your own Google Drive client ID for rclone:
1431
1432Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
1433
1434Select a project or create a new project.
1435
1436Under Overview, Google APIs, Google Apps APIs, click "Drive API", then "Enable".
1437
1438Click "Credentials" in the left-side panel (not "Go to credentials", which opens the wizard), then "Create credentials", then "OAuth client ID". It will prompt you to set the OAuth consent screen product name, if you haven't set one already.
1439
1440Choose an application type of "other", and click "Create". (the default name is fine)
1441
1442It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.
1443
1444(Thanks to @balazer on github for these instructions.)
1445
1446Amazon S3
1447
1448Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.
1449
1450Here is an example of making an s3 configuration. First run
1451
1452rclone config
1453This will guide you through an interactive setup process.
1454
1455No remotes found - make a new one
1456n) New remote
1457s) Set configuration password
1458n/s> n
1459name> remote
1460Type of storage to configure.
1461Choose a number from below, or type in your own value
1462 1 / Amazon Drive
1463 \ "amazon cloud drive"
1464 2 / Amazon S3 (also Dreamhost, Ceph)
1465 \ "s3"
1466 3 / Backblaze B2
1467 \ "b2"
1468 4 / Dropbox
1469 \ "dropbox"
1470 5 / Google Cloud Storage (this is not Google Drive)
1471 \ "google cloud storage"
1472 6 / Google Drive
1473 \ "drive"
1474 7 / Hubic
1475 \ "hubic"
1476 8 / Local Disk
1477 \ "local"
1478 9 / Microsoft OneDrive
1479 \ "onedrive"
148010 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
1481 \ "swift"
148211 / Yandex Disk
1483 \ "yandex"
1484Storage> 2
1485Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
1486Choose a number from below, or type in your own value
1487 1 / Enter AWS credentials in the next step
1488 \ "false"
1489 2 / Get AWS credentials from the environment (env vars or IAM)
1490 \ "true"
1491env_auth> 1
1492AWS Access Key ID - leave blank for anonymous access or runtime credentials.
1493access_key_id> access_key
1494AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
1495secret_access_key> secret_key
1496Region to connect to.
1497Choose a number from below, or type in your own value
1498 / The default endpoint - a good choice if you are unsure.
1499 1 | US Region, Northern Virginia or Pacific Northwest.
1500 | Leave location constraint empty.
1501 \ "us-east-1"
1502 / US West (Oregon) Region
1503 2 | Needs location constraint us-west-2.
1504 \ "us-west-2"
1505 / US West (Northern California) Region
1506 3 | Needs location constraint us-west-1.
1507 \ "us-west-1"
1508 / EU (Ireland) Region Region
1509 4 | Needs location constraint EU or eu-west-1.
1510 \ "eu-west-1"
1511 / EU (Frankfurt) Region
1512 5 | Needs location constraint eu-central-1.
1513 \ "eu-central-1"
1514 / Asia Pacific (Singapore) Region
1515 6 | Needs location constraint ap-southeast-1.
1516 \ "ap-southeast-1"
1517 / Asia Pacific (Sydney) Region
1518 7 | Needs location constraint ap-southeast-2.
1519 \ "ap-southeast-2"
1520 / Asia Pacific (Tokyo) Region
1521 8 | Needs location constraint ap-northeast-1.
1522 \ "ap-northeast-1"
1523 / South America (Sao Paulo) Region
1524 9 | Needs location constraint sa-east-1.
1525 \ "sa-east-1"
1526 / If using an S3 clone that only understands v2 signatures
152710 | eg Ceph/Dreamhost
1528 | set this and make sure you set the endpoint.
1529 \ "other-v2-signature"
1530 / If using an S3 clone that understands v4 signatures set this
153111 | and make sure you set the endpoint.
1532 \ "other-v4-signature"
1533region> 1
1534Endpoint for S3 API.
1535Leave blank if using AWS to use the default endpoint for the region.
1536Specify if using an S3 clone such as Ceph.
1537endpoint>
1538Location constraint - must be set to match the Region. Used when creating buckets only.
1539Choose a number from below, or type in your own value
1540 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
1541 \ ""
1542 2 / US West (Oregon) Region.
1543 \ "us-west-2"
1544 3 / US West (Northern California) Region.
1545 \ "us-west-1"
1546 4 / EU (Ireland) Region.
1547 \ "eu-west-1"
1548 5 / EU Region.
1549 \ "EU"
1550 6 / Asia Pacific (Singapore) Region.
1551 \ "ap-southeast-1"
1552 7 / Asia Pacific (Sydney) Region.
1553 \ "ap-southeast-2"
1554 8 / Asia Pacific (Tokyo) Region.
1555 \ "ap-northeast-1"
1556 9 / South America (Sao Paulo) Region.
1557 \ "sa-east-1"
1558location_constraint> 1
1559Canned ACL used when creating buckets and/or storing objects in S3.
1560For more info visit http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
1561Choose a number from below, or type in your own value
1562 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
1563 \ "private"
1564 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
1565 \ "public-read"
1566 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
1567 3 | Granting this on a bucket is generally not recommended.
1568 \ "public-read-write"
1569 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
1570 \ "authenticated-read"
1571 / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
1572 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
1573 \ "bucket-owner-read"
1574 / Both the object owner and the bucket owner get FULL_CONTROL over the object.
1575 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
1576 \ "bucket-owner-full-control"
1577acl> private
1578The server-side encryption algorithm used when storing this object in S3.
1579Choose a number from below, or type in your own value
1580 1 / None
1581 \ ""
1582 2 / AES256
1583 \ "AES256"
1584server_side_encryption>
1585Remote config
1586--------------------
1587[remote]
1588env_auth = false
1589access_key_id = access_key
1590secret_access_key = secret_key
1591region = us-east-1
1592endpoint =
1593location_constraint =
1594--------------------
1595y) Yes this is OK
1596e) Edit this remote
1597d) Delete this remote
1598y/e/d> y
1599This remote is called remote and can now be used like this
1600
1601See all buckets
1602
1603rclone lsd remote:
1604Make a new bucket
1605
1606rclone mkdir remote:bucket
1607List the contents of a bucket
1608
1609rclone ls remote:bucket
1610Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
1611
1612rclone sync /home/local/directory remote:bucket
1613Modified time
1614
1615The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.
1616
1617Multipart uploads
1618
1619rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
1620
1621Buckets and Regions
1622
1623With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.
1624
1625Authentication
1626
1627There are two ways to supply rclone with a set of AWS credentials. In order of precedence:
1628
1629Directly in the rclone configuration file (as configured by rclone config)
1630set access_key_id and secret_access_key
1631Runtime configuration:
1632set env_auth to true in the config file
1633Exporting the following environment variables before running rclone
1634Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
1635Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
1636Running rclone on an EC2 instance with an IAM role
1637If none of these option actually end up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below).
1638
1639Anonymous access to public buckets
1640
1641If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Eg
1642
1643No remotes found - make a new one
1644n) New remote
1645q) Quit config
1646n/q> n
1647name> anons3
1648What type of source is it?
1649Choose a number from below
1650 1) amazon cloud drive
1651 2) b2
1652 3) drive
1653 4) dropbox
1654 5) google cloud storage
1655 6) swift
1656 7) hubic
1657 8) local
1658 9) onedrive
165910) s3
166011) yandex
1661type> 10
1662Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
1663Choose a number from below, or type in your own value
1664 * Enter AWS credentials in the next step
1665 1) false
1666 * Get AWS credentials from the environment (env vars or IAM)
1667 2) true
1668env_auth> 1
1669AWS Access Key ID - leave blank for anonymous access or runtime credentials.
1670access_key_id>
1671AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
1672secret_access_key>
1673...
1674Then use it as normal with the name of the public bucket, eg
1675
1676rclone lsd anons3:1000genomes
1677You will be able to list and copy data but not upload it.
1678
1679Ceph
1680
1681Ceph is an object storage system which presents an Amazon S3 interface.
1682
1683To use rclone with ceph, you need to set the following parameters in the config.
1684
1685access_key_id = Whatever
1686secret_access_key = Whatever
1687endpoint = https://ceph.endpoint.goes.here/
1688region = other-v2-signature
1689Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the / escaped as \/. Make sure you only write / in the secret access key.
1690
1691Eg the dump from Ceph looks something like this (irrelevant keys removed).
1692
1693{
1694 "user_id": "xxx",
1695 "display_name": "xxxx",
1696 "keys": [
1697 {
1698 "user": "xxx",
1699 "access_key": "xxxxxx",
1700 "secret_key": "xxxxxx\/xxxx"
1701 }
1702 ],
1703}
1704Because this is a json dump, it is encoding the / as \/, so if you use the secret key as xxxxxx/xxxx it will work fine.
1705
1706Minio
1707
1708Minio is an object storage server built for cloud application developers and devops.
1709
1710It is very easy to install and provides an S3 compatible server which can be used by rclone.
1711
1712To use it, install Minio following the instructions from the web site.
1713
1714When it configures itself Minio will print something like this
1715
1716AccessKey: WLGDGYAQYIGI833EV05A SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1
1717
1718Minio Object Storage:
1719 http://127.0.0.1:9000
1720 http://10.0.0.3:9000
1721
1722Minio Browser:
1723 http://127.0.0.1:9000
1724 http://10.0.0.3:9000
1725These details need to go into rclone config like this. Note that it is important to put the region in as stated above.
1726
1727env_auth> 1
1728access_key_id> WLGDGYAQYIGI833EV05A
1729secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
1730region> us-east-1
1731endpoint> http://10.0.0.3:9000
1732location_constraint>
1733server_side_encryption>
1734Which makes the config file look like this
1735
1736[minio]
1737env_auth = false
1738access_key_id = WLGDGYAQYIGI833EV05A
1739secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
1740region = us-east-1
1741endpoint = http://10.0.0.3:9000
1742location_constraint =
1743server_side_encryption =
1744Minio doesn't support all the features of S3 yet. In particular it doesn't support MD5 checksums (ETags) or metadata. This means rclone can't check MD5SUMs or store the modified date. However you can work around this with the --size-only flag of rclone.
1745
1746So once set up, for example to copy files into a bucket
1747
1748rclone --size-only copy /path/to/files minio:bucket
1749Swift
1750
1751Swift refers to Openstack Object Storage. Commercial implementations of that being:
1752
1753Rackspace Cloud Files
1754Memset Memstore
1755Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.
1756
1757Here is an example of making a swift configuration. First run
1758
1759rclone config
1760This will guide you through an interactive setup process.
1761
1762No remotes found - make a new one
1763n) New remote
1764s) Set configuration password
1765n/s> n
1766name> remote
1767Type of storage to configure.
1768Choose a number from below, or type in your own value
1769 1 / Amazon Drive
1770 \ "amazon cloud drive"
1771 2 / Amazon S3 (also Dreamhost, Ceph)
1772 \ "s3"
1773 3 / Backblaze B2
1774 \ "b2"
1775 4 / Dropbox
1776 \ "dropbox"
1777 5 / Google Cloud Storage (this is not Google Drive)
1778 \ "google cloud storage"
1779 6 / Google Drive
1780 \ "drive"
1781 7 / Hubic
1782 \ "hubic"
1783 8 / Local Disk
1784 \ "local"
1785 9 / Microsoft OneDrive
1786 \ "onedrive"
178710 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
1788 \ "swift"
178911 / Yandex Disk
1790 \ "yandex"
1791Storage> 10
1792User name to log in.
1793user> user_name
1794API key or password.
1795key> password_or_api_key
1796Authentication URL for server.
1797Choose a number from below, or type in your own value
1798 1 / Rackspace US
1799 \ "https://auth.api.rackspacecloud.com/v1.0"
1800 2 / Rackspace UK
1801 \ "https://lon.auth.api.rackspacecloud.com/v1.0"
1802 3 / Rackspace v2
1803 \ "https://identity.api.rackspacecloud.com/v2.0"
1804 4 / Memset Memstore UK
1805 \ "https://auth.storage.memset.com/v1.0"
1806 5 / Memset Memstore UK v2
1807 \ "https://auth.storage.memset.com/v2.0"
1808 6 / OVH
1809 \ "https://auth.cloud.ovh.net/v2.0"
1810auth> 1
1811User domain - optional (v3 auth)
1812domain> Default
1813Tenant name - optional
1814tenant>
1815Tenant domain - optional (v3 auth)
1816tenant_domain>
1817Region name - optional
1818region>
1819Storage URL - optional
1820storage_url>
1821Remote config
1822AuthVersion - optional - set to (1,2,3) if your auth URL has no version
1823auth_version>
1824--------------------
1825[remote]
1826user = user_name
1827key = password_or_api_key
1828auth = https://auth.api.rackspacecloud.com/v1.0
1829tenant =
1830region =
1831storage_url =
1832--------------------
1833y) Yes this is OK
1834e) Edit this remote
1835d) Delete this remote
1836y/e/d> y
1837This remote is called remote and can now be used like this
1838
1839See all containers
1840
1841rclone lsd remote:
1842Make a new container
1843
1844rclone mkdir remote:container
1845List the contents of a container
1846
1847rclone ls remote:container
1848Sync /home/local/directory to the remote container, deleting any excess files in the container.
1849
1850rclone sync /home/local/directory remote:container
1851Specific options
1852
1853Here are the command line options specific to this cloud storage system.
1854
1855--swift-chunk-size=SIZE
1856
1857Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.
1858
1859Modified time
1860
1861The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
1862
1863This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
1864
1865Limitations
1866
1867The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
1868
1869Troubleshooting
1870
1871Rclone gives Failed to create file system for "remote:": Bad Request
1872
1873Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.
1874
1875So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.
1876
1877Rclone gives Failed to create file system: Response didn't have storage storage url and auth token
1878
1879This is most likely caused by forgetting to specify your tenant when setting up a swift remote.
1880
1881Dropbox
1882
1883Paths are specified as remote:path
1884
1885Dropbox paths may be as deep as required, eg remote:directory/subdirectory.
1886
1887The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.
1888
1889Here is an example of how to make a remote called remote. First run:
1890
1891 rclone config
1892This will guide you through an interactive setup process:
1893
1894n) New remote
1895d) Delete remote
1896q) Quit config
1897e/n/d/q> n
1898name> remote
1899Type of storage to configure.
1900Choose a number from below, or type in your own value
1901 1 / Amazon Drive
1902 \ "amazon cloud drive"
1903 2 / Amazon S3 (also Dreamhost, Ceph)
1904 \ "s3"
1905 3 / Backblaze B2
1906 \ "b2"
1907 4 / Dropbox
1908 \ "dropbox"
1909 5 / Google Cloud Storage (this is not Google Drive)
1910 \ "google cloud storage"
1911 6 / Google Drive
1912 \ "drive"
1913 7 / Hubic
1914 \ "hubic"
1915 8 / Local Disk
1916 \ "local"
1917 9 / Microsoft OneDrive
1918 \ "onedrive"
191910 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
1920 \ "swift"
192111 / Yandex Disk
1922 \ "yandex"
1923Storage> 4
1924Dropbox App Key - leave blank normally.
1925app_key>
1926Dropbox App Secret - leave blank normally.
1927app_secret>
1928Remote config
1929Please visit:
1930https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
1931Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
1932--------------------
1933[remote]
1934app_key =
1935app_secret =
1936token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1937--------------------
1938y) Yes this is OK
1939e) Edit this remote
1940d) Delete this remote
1941y/e/d> y
1942You can then use it like this,
1943
1944List directories in top level of your dropbox
1945
1946rclone lsd remote:
1947List all the files in your dropbox
1948
1949rclone ls remote:
1950To copy a local directory to a dropbox directory called backup
1951
1952rclone copy /home/source remote:backup
1953Modified time and MD5SUMs
1954
1955Dropbox doesn't provide the ability to set modification times in the V1 public API, so rclone can't support modified time with Dropbox.
1956
1957This may change in the future - see these issues for details:
1958
1959Dropbox V2 API
1960Allow syncs for remotes that can't set modtime on existing objects
1961Dropbox doesn't return any sort of checksum (MD5 or SHA1).
1962
1963Together that means that syncs to dropbox will effectively have the --size-only flag set.
1964
1965Specific options
1966
1967Here are the command line options specific to this cloud storage system.
1968
1969--dropbox-chunk-size=SIZE
1970
1971Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.
1972
1973Limitations
1974
1975Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
1976
1977There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempt to upload one of those file names, but the sync won't fail.
1978
1979If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbix:dir followed by an rclone rmdir dropbox:dir.
1980
1981Google Cloud Storage
1982
1983Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.
1984
1985The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.
1986
1987Here is an example of how to make a remote called remote. First run:
1988
1989 rclone config
1990This will guide you through an interactive setup process:
1991
1992n) New remote
1993d) Delete remote
1994q) Quit config
1995e/n/d/q> n
1996name> remote
1997Type of storage to configure.
1998Choose a number from below, or type in your own value
1999 1 / Amazon Drive
2000 \ "amazon cloud drive"
2001 2 / Amazon S3 (also Dreamhost, Ceph)
2002 \ "s3"
2003 3 / Backblaze B2
2004 \ "b2"
2005 4 / Dropbox
2006 \ "dropbox"
2007 5 / Google Cloud Storage (this is not Google Drive)
2008 \ "google cloud storage"
2009 6 / Google Drive
2010 \ "drive"
2011 7 / Hubic
2012 \ "hubic"
2013 8 / Local Disk
2014 \ "local"
2015 9 / Microsoft OneDrive
2016 \ "onedrive"
201710 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
2018 \ "swift"
201911 / Yandex Disk
2020 \ "yandex"
2021Storage> 5
2022Google Application Client Id - leave blank normally.
2023client_id>
2024Google Application Client Secret - leave blank normally.
2025client_secret>
2026Project number optional - needed only for list/create/delete buckets - see your developer console.
2027project_number> 12345678
2028Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
2029service_account_file>
2030Access Control List for new objects.
2031Choose a number from below, or type in your own value
2032 * Object owner gets OWNER access, and all Authenticated Users get READER access.
2033 1) authenticatedRead
2034 * Object owner gets OWNER access, and project team owners get OWNER access.
2035 2) bucketOwnerFullControl
2036 * Object owner gets OWNER access, and project team owners get READER access.
2037 3) bucketOwnerRead
2038 * Object owner gets OWNER access [default if left blank].
2039 4) private
2040 * Object owner gets OWNER access, and project team members get access according to their roles.
2041 5) projectPrivate
2042 * Object owner gets OWNER access, and all Users get READER access.
2043 6) publicRead
2044object_acl> 4
2045Access Control List for new buckets.
2046Choose a number from below, or type in your own value
2047 * Project team owners get OWNER access, and all Authenticated Users get READER access.
2048 1) authenticatedRead
2049 * Project team owners get OWNER access [default if left blank].
2050 2) private
2051 * Project team members get access according to their roles.
2052 3) projectPrivate
2053 * Project team owners get OWNER access, and all Users get READER access.
2054 4) publicRead
2055 * Project team owners get OWNER access, and all Users get WRITER access.
2056 5) publicReadWrite
2057bucket_acl> 2
2058Remote config
2059Remote config
2060Use auto config?
2061 * Say Y if not sure
2062 * Say N if you are working on a remote or headless machine or Y didn't work
2063y) Yes
2064n) No
2065y/n> y
2066If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
2067Log in and authorize rclone for access
2068Waiting for code...
2069Got code
2070--------------------
2071[remote]
2072type = google cloud storage
2073client_id =
2074client_secret =
2075token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
2076project_number = 12345678
2077object_acl = private
2078bucket_acl = private
2079--------------------
2080y) Yes this is OK
2081e) Edit this remote
2082d) Delete this remote
2083y/e/d> y
2084Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
2085
2086This remote is called remote and can now be used like this
2087
2088See all the buckets in your project
2089
2090rclone lsd remote:
2091Make a new bucket
2092
2093rclone mkdir remote:bucket
2094List the contents of a bucket
2095
2096rclone ls remote:bucket
2097Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
2098
2099rclone sync /home/local/directory remote:bucket
2100Service Account support
2101
2102You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
2103
2104To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.
2105
2106To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow.
2107
2108Modified time
2109
2110Google google cloud storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.
2111
2112Amazon Drive
2113
2114Paths are specified as remote:path
2115
2116Paths may be as deep as required, eg remote:directory/subdirectory.
2117
2118The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.
2119
2120Here is an example of how to make a remote called remote. First run:
2121
2122 rclone config
2123This will guide you through an interactive setup process:
2124
2125n) New remote
2126d) Delete remote
2127q) Quit config
2128e/n/d/q> n
2129name> remote
2130Type of storage to configure.
2131Choose a number from below, or type in your own value
2132 1 / Amazon Drive
2133 \ "amazon cloud drive"
2134 2 / Amazon S3 (also Dreamhost, Ceph)
2135 \ "s3"
2136 3 / Backblaze B2
2137 \ "b2"
2138 4 / Dropbox
2139 \ "dropbox"
2140 5 / Google Cloud Storage (this is not Google Drive)
2141 \ "google cloud storage"
2142 6 / Google Drive
2143 \ "drive"
2144 7 / Hubic
2145 \ "hubic"
2146 8 / Local Disk
2147 \ "local"
2148 9 / Microsoft OneDrive
2149 \ "onedrive"
215010 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
2151 \ "swift"
215211 / Yandex Disk
2153 \ "yandex"
2154Storage> 1
2155Amazon Application Client Id - leave blank normally.
2156client_id>
2157Amazon Application Client Secret - leave blank normally.
2158client_secret>
2159Remote config
2160If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
2161Log in and authorize rclone for access
2162Waiting for code...
2163Got code
2164--------------------
2165[remote]
2166client_id =
2167client_secret =
2168token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
2169--------------------
2170y) Yes this is OK
2171e) Edit this remote
2172d) Delete this remote
2173y/e/d> y
2174See the remote setup docs for how to set it up on a machine with no Internet browser available.
2175
2176Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall.
2177
2178Once configured you can then use rclone like this,
2179
2180List directories in top level of your Amazon Drive
2181
2182rclone lsd remote:
2183List all the files in your Amazon Drive
2184
2185rclone ls remote:
2186To copy a local directory to an Amazon Drive directory called backup
2187
2188rclone copy /home/source remote:backup
2189Modified time and MD5SUMs
2190
2191Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.
2192
2193It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.
2194
2195Deleting files
2196
2197Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website.
2198
2199Specific options
2200
2201Here are the command line options specific to this cloud storage system.
2202
2203--acd-templink-threshold=SIZE
2204
2205Files this size or more will be downloaded via their tempLink. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.
2206
2207To download files above this threshold, rclone requests a tempLink which downloads the file through a temporary URL directly from the underlying S3 storage.
2208
2209--acd-upload-wait-time=TIME
2210
2211Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This controls the time rclone waits - 2 minutes by default. You might want to increase the time if you are having problems with very big files. Upload with the -v flag for more info.
2212
2213Limitations
2214
2215Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
2216
2217Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.
2218
2219Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
2220
2221At the time of writing (Jan 2016) is in the area of 50GB per file. This means that larger files are likely to fail.
2222
2223Unfortunatly there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size=50GB option to limit the maximum size of uploaded files.
2224
2225Microsoft One Drive
2226
2227Paths are specified as remote:path
2228
2229Paths may be as deep as required, eg remote:directory/subdirectory.
2230
2231The initial setup for One Drive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.
2232
2233Here is an example of how to make a remote called remote. First run:
2234
2235 rclone config
2236This will guide you through an interactive setup process:
2237
2238No remotes found - make a new one
2239n) New remote
2240s) Set configuration password
2241n/s> n
2242name> remote
2243Type of storage to configure.
2244Choose a number from below, or type in your own value
2245 1 / Amazon Drive
2246 \ "amazon cloud drive"
2247 2 / Amazon S3 (also Dreamhost, Ceph)
2248 \ "s3"
2249 3 / Backblaze B2
2250 \ "b2"
2251 4 / Dropbox
2252 \ "dropbox"
2253 5 / Google Cloud Storage (this is not Google Drive)
2254 \ "google cloud storage"
2255 6 / Google Drive
2256 \ "drive"
2257 7 / Hubic
2258 \ "hubic"
2259 8 / Local Disk
2260 \ "local"
2261 9 / Microsoft OneDrive
2262 \ "onedrive"
226310 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
2264 \ "swift"
226511 / Yandex Disk
2266 \ "yandex"
2267Storage> 9
2268Microsoft App Client Id - leave blank normally.
2269client_id>
2270Microsoft App Client Secret - leave blank normally.
2271client_secret>
2272Remote config
2273Use auto config?
2274 * Say Y if not sure
2275 * Say N if you are working on a remote or headless machine
2276y) Yes
2277n) No
2278y/n> y
2279If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
2280Log in and authorize rclone for access
2281Waiting for code...
2282Got code
2283--------------------
2284[remote]
2285client_id =
2286client_secret =
2287token = {"access_token":"XXXXXX"}
2288--------------------
2289y) Yes this is OK
2290e) Edit this remote
2291d) Delete this remote
2292y/e/d> y
2293See the remote setup docs for how to set it up on a machine with no Internet browser available.
2294
2295Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall.
2296
2297Once configured you can then use rclone like this,
2298
2299List directories in top level of your One Drive
2300
2301rclone lsd remote:
2302List all the files in your One Drive
2303
2304rclone ls remote:
2305To copy a local directory to an One Drive directory called backup
2306
2307rclone copy /home/source remote:backup
2308Modified time and hashes
2309
2310One Drive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
2311
2312One drive supports SHA1 type hashes, so you can use --checksum flag.
2313
2314Deleting files
2315
2316Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the One Drive website.
2317
2318Specific options
2319
2320Here are the command line options specific to this cloud storage system.
2321
2322--onedrive-chunk-size=SIZE
2323
2324Above this size files will be chunked - must be multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.
2325
2326--onedrive-upload-cutoff=SIZE
2327
2328Cutoff for switching to chunked upload - must be <= 100MB. The default is 10MB.
2329
2330Limitations
2331
2332Note that One Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
2333
2334Rclone only supports your default One Drive, and doesn't work with One Drive for business. Both these issues may be fixed at some point depending on user demand!
2335
2336There are quite a few characters that can't be in One Drive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead.
2337
2338Hubic
2339
2340Paths are specified as remote:path
2341
2342Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, eg remote:container/path/to/dir.
2343
2344The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config walks you through it.
2345
2346Here is an example of how to make a remote called remote. First run:
2347
2348 rclone config
2349This will guide you through an interactive setup process:
2350
2351n) New remote
2352s) Set configuration password
2353n/s> n
2354name> remote
2355Type of storage to configure.
2356Choose a number from below, or type in your own value
2357 1 / Amazon Drive
2358 \ "amazon cloud drive"
2359 2 / Amazon S3 (also Dreamhost, Ceph)
2360 \ "s3"
2361 3 / Backblaze B2
2362 \ "b2"
2363 4 / Dropbox
2364 \ "dropbox"
2365 5 / Google Cloud Storage (this is not Google Drive)
2366 \ "google cloud storage"
2367 6 / Google Drive
2368 \ "drive"
2369 7 / Hubic
2370 \ "hubic"
2371 8 / Local Disk
2372 \ "local"
2373 9 / Microsoft OneDrive
2374 \ "onedrive"
237510 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
2376 \ "swift"
237711 / Yandex Disk
2378 \ "yandex"
2379Storage> 7
2380Hubic Client Id - leave blank normally.
2381client_id>
2382Hubic Client Secret - leave blank normally.
2383client_secret>
2384Remote config
2385Use auto config?
2386 * Say Y if not sure
2387 * Say N if you are working on a remote or headless machine
2388y) Yes
2389n) No
2390y/n> y
2391If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
2392Log in and authorize rclone for access
2393Waiting for code...
2394Got code
2395--------------------
2396[remote]
2397client_id =
2398client_secret =
2399token = {"access_token":"XXXXXX"}
2400--------------------
2401y) Yes this is OK
2402e) Edit this remote
2403d) Delete this remote
2404y/e/d> y
2405See the remote setup docs for how to set it up on a machine with no Internet browser available.
2406
2407Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall.
2408
2409Once configured you can then use rclone like this,
2410
2411List containers in the top level of your Hubic
2412
2413rclone lsd remote:
2414List all the files in your Hubic
2415
2416rclone ls remote:
2417To copy a local directory to an Hubic directory called backup
2418
2419rclone copy /home/source remote:backup
2420If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default directory
2421
2422rclone copy /home/source remote:default/backup
2423Modified time
2424
2425The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
2426
2427This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
2428
2429Note that Hubic wraps the Swift backend, so most of the properties of are the same.
2430
2431Limitations
2432
2433This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
2434
2435The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
2436
2437Backblaze B2
2438
2439B2 is Backblaze's cloud storage system.
2440
2441Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.
2442
2443Here is an example of making a b2 configuration. First run
2444
2445rclone config
2446This will guide you through an interactive setup process. You will need your account number (a short hex number) and key (a long hex number) which you can get from the b2 control panel.
2447
2448No remotes found - make a new one
2449n) New remote
2450q) Quit config
2451n/q> n
2452name> remote
2453Type of storage to configure.
2454Choose a number from below, or type in your own value
2455 1 / Amazon Drive
2456 \ "amazon cloud drive"
2457 2 / Amazon S3 (also Dreamhost, Ceph)
2458 \ "s3"
2459 3 / Backblaze B2
2460 \ "b2"
2461 4 / Dropbox
2462 \ "dropbox"
2463 5 / Google Cloud Storage (this is not Google Drive)
2464 \ "google cloud storage"
2465 6 / Google Drive
2466 \ "drive"
2467 7 / Hubic
2468 \ "hubic"
2469 8 / Local Disk
2470 \ "local"
2471 9 / Microsoft OneDrive
2472 \ "onedrive"
247310 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
2474 \ "swift"
247511 / Yandex Disk
2476 \ "yandex"
2477Storage> 3
2478Account ID
2479account> 123456789abc
2480Application Key
2481key> 0123456789abcdef0123456789abcdef0123456789
2482Endpoint for the service - leave blank normally.
2483endpoint>
2484Remote config
2485--------------------
2486[remote]
2487account = 123456789abc
2488key = 0123456789abcdef0123456789abcdef0123456789
2489endpoint =
2490--------------------
2491y) Yes this is OK
2492e) Edit this remote
2493d) Delete this remote
2494y/e/d> y
2495This remote is called remote and can now be used like this
2496
2497See all buckets
2498
2499rclone lsd remote:
2500Make a new bucket
2501
2502rclone mkdir remote:bucket
2503List the contents of a bucket
2504
2505rclone ls remote:bucket
2506Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
2507
2508rclone sync /home/local/directory remote:bucket
2509Modified time
2510
2511The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.
2512
2513Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.
2514
2515SHA1 checksums
2516
2517The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.
2518
2519Large files which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.
2520
2521Transfers
2522
2523Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equiped laptop the optimum setting is about --transfers 32 though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4 is definitely too low for Backblaze B2 though.
2524
2525Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers of these in use at any moment, so this sets the upper limit on the memory used.
2526
2527Versions
2528
2529When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will still be available.
2530
2531Old versions of files are visible using the --b2-versions flag.
2532
2533If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff.
2534
2535When you purge a bucket, the current and the old versions will be deleted then the bucket will be deleted.
2536
2537However delete will cause the current versions of the files to become hidden old versions.
2538
2539Here is a session showing the listing and and retreival of an old version followed by a cleanup of the old versions.
2540
2541Show current version and all the versions with --b2-versions flag.
2542
2543$ rclone -q ls b2:cleanup-test
2544 9 one.txt
2545
2546$ rclone -q --b2-versions ls b2:cleanup-test
2547 9 one.txt
2548 8 one-v2016-07-04-141032-000.txt
2549 16 one-v2016-07-04-141003-000.txt
2550 15 one-v2016-07-02-155621-000.txt
2551Retreive an old verson
2552
2553$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
2554
2555$ ls -l /tmp/one-v2016-07-04-141003-000.txt
2556-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
2557Clean up all the old versions and show that they've gone.
2558
2559$ rclone -q cleanup b2:cleanup-test
2560
2561$ rclone -q ls b2:cleanup-test
2562 9 one.txt
2563
2564$ rclone -q --b2-versions ls b2:cleanup-test
2565 9 one.txt
2566Specific options
2567
2568Here are the command line options specific to this cloud storage system.
2569
2570--b2-chunk-size valuee=SIZE
2571
2572When uploading large files chunk the file into this size. Note that these chunks are buffered in memory and there might a maximum of --transfers chunks in progress at once. 100,000,000 Bytes is the minimim size (default 96M).
2573
2574--b2-upload-cutoff=SIZE
2575
2576Cutoff for switching to chunked upload (default 190.735 MiB == 200 MB). Files above this size will be uploaded in chunks of --b2-chunk-size.
2577
2578This value should be set no larger than 4.657GiB (== 5GB) as this is the largest file size that can be uploaded.
2579
2580--b2-test-mode=FLAG
2581
2582This is for debugging purposes only.
2583
2584Setting FLAG to one of the strings below will cause b2 to return specific errors for debugging purposes.
2585
2586fail_some_uploads
2587expire_some_account_authorization_tokens
2588force_cap_exceeded
2589These will be set in the X-Bz-Test-Mode header which is documented in the b2 integrations checklist.
2590
2591--b2-versions
2592
2593When set rclone will show and act on older versions of files. For example
2594
2595Listing without --b2-versions
2596
2597$ rclone -q ls b2:cleanup-test
2598 9 one.txt
2599And with
2600
2601$ rclone -q --b2-versions ls b2:cleanup-test
2602 9 one.txt
2603 8 one-v2016-07-04-141032-000.txt
2604 16 one-v2016-07-04-141003-000.txt
2605 15 one-v2016-07-02-155621-000.txt
2606Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.
2607
2608Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them.
2609
2610Yandex Disk
2611
2612Yandex Disk is a cloud storage solution created by Yandex.
2613
2614Yandex paths may be as deep as required, eg remote:directory/subdirectory.
2615
2616Here is an example of making a yandex configuration. First run
2617
2618rclone config
2619This will guide you through an interactive setup process:
2620
2621No remotes found - make a new one
2622n) New remote
2623s) Set configuration password
2624n/s> n
2625name> remote
2626Type of storage to configure.
2627Choose a number from below, or type in your own value
2628 1 / Amazon Drive
2629 \ "amazon cloud drive"
2630 2 / Amazon S3 (also Dreamhost, Ceph)
2631 \ "s3"
2632 3 / Backblaze B2
2633 \ "b2"
2634 4 / Dropbox
2635 \ "dropbox"
2636 5 / Google Cloud Storage (this is not Google Drive)
2637 \ "google cloud storage"
2638 6 / Google Drive
2639 \ "drive"
2640 7 / Hubic
2641 \ "hubic"
2642 8 / Local Disk
2643 \ "local"
2644 9 / Microsoft OneDrive
2645 \ "onedrive"
264610 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
2647 \ "swift"
264811 / Yandex Disk
2649 \ "yandex"
2650Storage> 11
2651Yandex Client Id - leave blank normally.
2652client_id>
2653Yandex Client Secret - leave blank normally.
2654client_secret>
2655Remote config
2656Use auto config?
2657 * Say Y if not sure
2658 * Say N if you are working on a remote or headless machine
2659y) Yes
2660n) No
2661y/n> y
2662If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
2663Log in and authorize rclone for access
2664Waiting for code...
2665Got code
2666--------------------
2667[remote]
2668client_id =
2669client_secret =
2670token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
2671--------------------
2672y) Yes this is OK
2673e) Edit this remote
2674d) Delete this remote
2675y/e/d> y
2676See the remote setup docs for how to set it up on a machine with no Internet browser available.
2677
2678Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall.
2679
2680Once configured you can then use rclone like this,
2681
2682See top level directories
2683
2684rclone lsd remote:
2685Make a new directory
2686
2687rclone mkdir remote:directory
2688List the contents of a directory
2689
2690rclone ls remote:directory
2691Sync /home/local/directory to the remote path, deleting any excess files in the path.
2692
2693rclone sync /home/local/directory remote:directory
2694Modified time
2695
2696Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.
2697
2698MD5 checksums
2699
2700MD5 checksums are natively supported by Yandex Disk.
2701
2702Crypt
2703
2704The crypt remote encrypts and decrypts another remote.
2705
2706To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.
2707
2708First check your chosen remote is working - we'll call it remote:path in these docs. Note that anything inside remote:path will be encrypted and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket. If you just use s3: then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.
2709
2710Now configure crypt using rclone config. We will call this one secret to differentiate it from the remote.
2711
2712No remotes found - make a new one
2713n) New remote
2714s) Set configuration password
2715q) Quit config
2716n/s/q> n
2717name> secret
2718Type of storage to configure.
2719Choose a number from below, or type in your own value
2720 1 / Amazon Drive
2721 \ "amazon cloud drive"
2722 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
2723 \ "s3"
2724 3 / Backblaze B2
2725 \ "b2"
2726 4 / Dropbox
2727 \ "dropbox"
2728 5 / Encrypt/Decrypt a remote
2729 \ "crypt"
2730 6 / Google Cloud Storage (this is not Google Drive)
2731 \ "google cloud storage"
2732 7 / Google Drive
2733 \ "drive"
2734 8 / Hubic
2735 \ "hubic"
2736 9 / Local Disk
2737 \ "local"
273810 / Microsoft OneDrive
2739 \ "onedrive"
274011 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
2741 \ "swift"
274212 / Yandex Disk
2743 \ "yandex"
2744Storage> 5
2745Remote to encrypt/decrypt.
2746remote> remote:path
2747How to encrypt the filenames.
2748Choose a number from below, or type in your own value
2749 1 / Don't encrypt the file names. Adds a ".bin" extension only.
2750 \ "off"
2751 2 / Encrypt the filenames see the docs for the details.
2752 \ "standard"
2753filename_encryption> 2
2754Password or pass phrase for encryption.
2755y) Yes type in my own password
2756g) Generate random password
2757y/g> y
2758Enter the password:
2759password:
2760Confirm the password:
2761password:
2762Password or pass phrase for salt. Optional but recommended.
2763Should be different to the previous password.
2764y) Yes type in my own password
2765g) Generate random password
2766n) No leave this optional password blank
2767y/g/n> g
2768Password strength in bits.
276964 is just about memorable
2770128 is secure
27711024 is the maximum
2772Bits> 128
2773Your password is: JAsJvRcgR-_veXNfy_sGmQ
2774Use this password?
2775y) Yes
2776n) No
2777y/n> y
2778Remote config
2779--------------------
2780[secret]
2781remote = remote:path
2782filename_encryption = standard
2783password = CfDxopZIXFG0Oo-ac7dPLWWOHkNJbw
2784password2 = HYUpfuzHJL8qnX9fOaIYijq0xnVLwyVzp3y4SF3TwYqAU6HLysk
2785--------------------
2786y) Yes this is OK
2787e) Edit this remote
2788d) Delete this remote
2789y/e/d> y
2790Important The password is stored in the config file is lightly obscured so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.
2791
2792A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.
2793
2794Note that rclone does not encrypt * file length - this can be calcuated within 16 bytes * modification time - used for syncing
2795
2796Example
2797
2798To test I made a little directory of files using "standard" file name encryption.
2799
2800plaintext/
2801+-- file0.txt
2802+-- file1.txt
2803+-- subdir
2804 +-- file2.txt
2805 +-- file3.txt
2806 +-- subsubdir
2807 +-- file4.txt
2808Copy these to the remote and list them back
2809
2810$ rclone -q copy plaintext secret:
2811$ rclone -q ls secret:
2812 7 file1.txt
2813 6 file0.txt
2814 8 subdir/file2.txt
2815 10 subdir/subsubdir/file4.txt
2816 9 subdir/file3.txt
2817Now see what that looked like when encrypted
2818
2819$ rclone -q ls remote:path
2820 55 hagjclgavj2mbiqm6u6cnjjqcg
2821 54 v05749mltvv1tf4onltun46gls
2822 57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
2823 58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
2824 56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
2825Note that this retains the directory structure which means you can do this
2826
2827$ rclone -q ls secret:subdir
2828 8 file2.txt
2829 9 file3.txt
2830 10 subsubdir/file4.txt
2831If don't use file name encryption then the remote will look like this - note the .bin extensions added to prevent the cloud provider attempting to interpret the data.
2832
2833$ rclone -q ls remote:path
2834 54 file0.txt.bin
2835 57 subdir/file3.txt.bin
2836 56 subdir/file2.txt.bin
2837 58 subdir/subsubdir/file4.txt.bin
2838 55 file1.txt.bin
2839File name encryption modes
2840
2841Here are some of the features of the file name encryption modes
2842
2843Off * doesn't hide file names or directory structure * allows for longer file names (~246 characters) * can use sub paths and copy single files
2844
2845Standard * file names encrypted * file names can't be as long (~156 characters) * can use sub paths and copy single files * directory structure visibile * identical files names will have identical uploaded names * can use shortcuts to shorten the directory recursion
2846
2847Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
2848
2849There may be an even more secure file name encryption mode in the future which will address the long file name problem.
2850
2851File formats
2852
2853File encryption
2854
2855Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.
2856
2857Header
2858
28598 bytes magic string RCLONE\x00\x00
286024 bytes Nonce (IV)
2861The initial nonce is generated from the operating systems crypto strong random number genrator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being re-used is miniscule. If you wrote an exabyte of data (10�8 bytes) you would have a probability of approximately 2�10?�� of re-using a nonce.
2862
2863Chunk
2864
2865Each chunk will contain 64kB of data, except for the last one which may have less data. The data chunk is in standard NACL secretbox format. Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.
2866
2867Each chunk contains:
2868
286916 Bytes of Poly1305 authenticator
28701 - 65536 bytes XSalsa20 encrypted data
287164k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.
2872
2873This uses a 32 byte (256 bit key) key derived from the user password.
2874
2875Examples
2876
28771 byte file will encrypt to
2878
287932 bytes header
288017 bytes data chunk
288149 bytes total
2882
28831MB (1048576 bytes) file will encrypt to
2884
288532 bytes header
288616 chunks of 65568 bytes
28871049120 bytes total (a 0.05% overhead). This is the overhead for big files.
2888
2889Name encryption
2890
2891File names are encrypted segment by segment - the path is broken up into / separated strings and these are encrypted individually.
2892
2893File segments are padded using using PKCS#7 to a multiple of 16 bytes before encryption.
2894
2895They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
2896
2897This makes for determinstic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.
2898
2899This means that
2900
2901filenames with the same name will encrypt the same
2902filenames which start the same won't have a common prefix
2903This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password.
2904
2905After encryption they are written out using a modified version of standard base32 encoding as described in RFC4648. The standard encoding is modified in two ways:
2906
2907it becomes lower case (no-one likes upper case filenames!)
2908we strip the padding character =
2909base32 is used rather than the more efficient base64 so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).
2910
2911Key derivation
2912
2913Rclone uses scrypt with parameters N=16384, r=8, p=1 with a an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
2914
2915scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection agains this you should always use a salt.
2916
2917Local Filesystem
2918
2919Local paths are specified as normal filesystem paths, eg /path/to/wherever, so
2920
2921rclone sync /home/source /tmp/destination
2922Will sync /home/source to /tmp/destination
2923
2924These can be configured into the config file for consistencies sake, but it is probably easier not to.
2925
2926Modified time
2927
2928Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second on OS X.
2929
2930Filenames
2931
2932Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
2933
2934There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded files names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the convmv tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers.
2935
2936If an invalid (non-UTF8) filename is read, the invalid caracters will be replaced with the unicode replacement character, '?'. rclone will emit a debug message in this case (use -v to see), eg
2937
2938Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
2939Long paths on Windows
2940
2941Rclone handles long paths automatically, by converting all paths to long UNC paths which allows paths up to 32,767 characters.
2942
2943This is why you will see that your paths, for instance c:\files is converted to the UNC path \\?\c:\files in the output, and \\server\share is converted to \\?\UNC\server\share.
2944
2945However, in rare cases this may cause problems with buggy file system drivers like EncFS. To disable UNC conversion globally, add this to your .rclone.conf file:
2946
2947[local]
2948nounc = true
2949If you want to selectively disable UNC, you can add it to a separate entry like this:
2950
2951[nounc]
2952type = local
2953nounc = true
2954And use rclone like this:
2955
2956rclone copy c:\src nounc:z:\dst
2957
2958This will use UNC paths on c:\src but not on z:\dst. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.
2959
2960Changelog
2961
2962v1.33 - 2016-08-24
2963New Features
2964Implement encryption
2965data encrypted in NACL secretbox format
2966with optional file name encryption
2967New commands
2968rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
2969works on Linux, FreeBSD and OS X (need testers for the last 2!)
2970rclone cat - outputs remote file or files to the terminal
2971rclone genautocomplete - command to make a bash completion script for rclone
2972Editing a remote using rclone config now goes through the wizard
2973Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors
2974Use cobra for sub commands and docs generation
2975drive
2976Document how to make your own client_id
2977s3
2978User-configurable Amazon S3 ACL (thanks Radek �enfeld)
2979b2
2980Fix stats accounting for upload - no more jumping to 100% done
2981On cleanup delete hide marker if it is the current file
2982New B2 API endpoint (thanks Per Cederberg)
2983Set maximum backoff to 5 Minutes
2984onedrive
2985Fix URL escaping in file names - eg uploading files with + in them.
2986amazon cloud drive
2987Fix token expiry during large uploads
2988Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
2989local
2990Fix filenames with invalid UTF-8 not being uploaded
2991Fix problem with some UTF-8 characters on OS X
2992v1.32 - 2016-07-13
2993Backblaze B2
2994Fix upload of files large files not in root
2995v1.31 - 2016-07-13
2996New Features
2997Reduce memory on sync by about 50%
2998Implement --no-traverse flag to stop copy traversing the destination remote.
2999This can be used to reduce memory usage down to the smallest possible.
3000Useful to copy a small number of files into a large destination folder.
3001Implement cleanup command for emptying trash / removing old versions of files
3002Currently B2 only
3003Single file handling improved
3004Now copied with --files-from
3005Automatically sets --no-traverse when copying a single file
3006Info on using installing with ansible - thanks Stefan Weichinger
3007Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
3008Bug Fixes
3009Fix move command - stop it running for overlapping Fses - this was causing data loss.
3010Local
3011Fix incomplete hashes - this was causing problems for B2.
3012Amazon Drive
3013Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
3014Swift
3015Add support for non-default project domain - thanks Antonio Messina.
3016S3
3017Add instructions on how to use rclone with minio.
3018Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
3019Skip setting the modified time for objects > 5GB as it isn't possible.
3020Backblaze B2
3021Add --b2-versions flag so old versions can be listed and retreived.
3022Treat 403 errors (eg cap exceeded) as fatal.
3023Implement cleanup command for deleting old file versions.
3024Make error handling compliant with B2 integrations notes.
3025Fix handling of token expiry.
3026Implement --b2-test-mode to set X-Bz-Test-Mode header.
3027Set cutoff for chunked upload to 200MB as per B2 guidelines.
3028Make upload multi-threaded.
3029Dropbox
3030Don't retry 461 errors.
3031v1.30 - 2016-06-18
3032New Features
3033Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
3034Directory include filtering for efficiency
3035--max-depth parameter
3036Better error reporting
3037More to come
3038Retry more errors
3039Add --ignore-size flag - for uploading images to onedrive
3040Log -v output to stdout by default
3041Display the transfer stats in more human readable form
3042Make 0 size files specifiable with --max-size 0b
3043Add b suffix so we can specify bytes in --bwlimit, --min-size etc
3044Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
3045Bug Fixes
3046Fix retry doing one too many retries
3047Local
3048Fix problems with OS X and UTF-8 characters
3049Amazon Drive
3050Check a file exists before uploading to help with 408 Conflict errors
3051Reauth on 401 errors - this has been causing a lot of problems
3052Work around spurious 403 errors
3053Restart directory listings on error
3054Google Drive
3055Check a file exists before uploading to help with duplicates
3056Fix retry of multipart uploads
3057Backblaze B2
3058Implement large file uploading
3059S3
3060Add AES256 server-side encryption for - thanks Justin R. Wilson
3061Google Cloud Storage
3062Make sure we don't use conflicting content types on upload
3063Add service account support - thanks Michal Witkowski
3064Swift
3065Add auth version parameter
3066Add domain option for openstack (v3 auth) - thanks Fabian Ruff
3067v1.29 - 2016-04-18
3068New Features
3069Implement -I, --ignore-times for unconditional upload
3070Improve dedupecommand
3071Now removes identical copies without asking
3072Now obeys --dry-run
3073Implement --dedupe-mode for non interactive running
3074--dedupe-mode interactive - interactive the default.
3075--dedupe-mode skip - removes identical files then skips anything left.
3076--dedupe-mode first - removes identical files then keeps the first one.
3077--dedupe-mode newest - removes identical files then keeps the newest one.
3078--dedupe-mode oldest - removes identical files then keeps the oldest one.
3079--dedupe-mode rename - removes identical files then renames the rest to be different.
3080Bug fixes
3081Make rclone check obey the --size-only flag.
3082Use "application/octet-stream" if discovered mime type is invalid.
3083Fix missing "quit" option when there are no remotes.
3084Google Drive
3085Increase default chunk size to 8 MB - increases upload speed of big files
3086Speed up directory listings and make more reliable
3087Add missing retries for Move and DirMove - increases reliability
3088Preserve mime type on file update
3089Backblaze B2
3090Enable mod time syncing
3091This means that B2 will now check modification times
3092It will upload new files to update the modification times
3093(there isn't an API to just set the mod time.)
3094If you want the old behaviour use --size-only.
3095Update API to new version
3096Fix parsing of mod time when not in metadata
3097Swift/Hubic
3098Don't return an MD5SUM for static large objects
3099S3
3100Fix uploading files bigger than 50GB
3101v1.28 - 2016-03-01
3102New Features
3103Configuration file encryption - thanks Klaus Post
3104Improve rclone config adding more help and making it easier to understand
3105Implement -u/--update so creation times can be used on all remotes
3106Implement --low-level-retries flag
3107Optionally disable gzip compression on downloads with --no-gzip-encoding
3108Bug fixes
3109Don't make directories if --dry-run set
3110Fix and document the move command
3111Fix redirecting stderr on unix-like OSes when using --log-file
3112Fix delete command to wait until all finished - fixes missing deletes.
3113Backblaze B2
3114Use one upload URL per go routine fixes more than one upload using auth token
3115Add pacing, retries and reauthentication - fixes token expiry problems
3116Upload without using a temporary file from local (and remotes which support SHA1)
3117Fix reading metadata for all files when it shouldn't have been
3118Drive
3119Fix listing drive documents at root
3120Disable copy and move for Google docs
3121Swift
3122Fix uploading of chunked files with non ASCII characters
3123Allow setting of storage_url in the config - thanks Xavier Lucas
3124S3
3125Allow IAM role and credentials from environment variables - thanks Brian Stengaard
3126Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
3127Amazon Drive
3128Retry on more things to make directory listings more reliable
3129v1.27 - 2016-01-31
3130New Features
3131Easier headless configuration with rclone authorize
3132Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.
3133delete command which does obey the filters (unlike purge)
3134dedupe command to deduplicate a remote. Useful with Google Drive.
3135Add --ignore-existing flag to skip all files that exist on destination.
3136Add --delete-before, --delete-during, --delete-after flags.
3137Add --memprofile flag to debug memory use.
3138Warn the user about files with same name but different case
3139Make --include rules add their implict exclude * at the end of the filter list
3140Deprecate compiling with go1.3
3141Amazon Drive
3142Fix download of files > 10 GB
3143Fix directory traversal ("Next token is expired") for large directory listings
3144Remove 409 conflict from error codes we will retry - stops very long pauses
3145Backblaze B2
3146SHA1 hashes now checked by rclone core
3147Drive
3148Add --drive-auth-owner-only to only consider files owned by the user - thanks Bj�rn Harrtell
3149Export Google documents
3150Dropbox
3151Make file exclusion error controllable with -q
3152Swift
3153Fix upload from unprivileged user.
3154S3
3155Fix updating of mod times of files with + in.
3156Local
3157Add local file system option to disable UNC on Windows.
3158v1.26 - 2016-01-02
3159New Features
3160Yandex storage backend - thank you Dmitry Burdeev ("dibu")
3161Implement Backblaze B2 storage backend
3162Add --min-age and --max-age flags - thank you Adriano Aur�lio Meirelles
3163Make ls/lsl/md5sum/size/check obey includes and excludes
3164Fixes
3165Fix crash in http logging
3166Upload releases to github too
3167Swift
3168Fix sync for chunked files
3169One Drive
3170Re-enable server side copy
3171Don't mask HTTP error codes with JSON decode error
3172S3
3173Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)
3174v1.25 - 2015-11-14
3175New features
3176Implement Hubic storage system
3177Fixes
3178Fix deletion of some excluded files without --delete-excluded
3179This could have deleted files unexpectedly on sync
3180Always check first with --dry-run!
3181Swift
3182Stop SetModTime losing metadata (eg X-Object-Manifest)
3183This could have caused data loss for files > 5GB in size
3184Use ContentType from Object to avoid lookups in listings
3185One Drive
3186disable server side copy as it seems to be broken at Microsoft
3187v1.24 - 2015-11-07
3188New features
3189Add support for Microsoft One Drive
3190Add --no-check-certificate option to disable server certificate verification
3191Add async readahead buffer for faster transfer of big files
3192Fixes
3193Allow spaces in remotes and check remote names for validity at creation time
3194Allow '&' and disallow ':' in Windows filenames.
3195Swift
3196Ignore directory marker objects where appropriate - allows working with Hubic
3197Don't delete the container if fs wasn't at root
3198S3
3199Don't delete the bucket if fs wasn't at root
3200Google Cloud Storage
3201Don't delete the bucket if fs wasn't at root
3202v1.23 - 2015-10-03
3203New features
3204Implement rclone size for measuring remotes
3205Fixes
3206Fix headless config for drive and gcs
3207Tell the user they should try again if the webserver method failed
3208Improve output of --dump-headers
3209S3
3210Allow anonymous access to public buckets
3211Swift
3212Stop chunked operations logging "Failed to read info: Object Not Found"
3213Use Content-Length on uploads for extra reliability
3214v1.22 - 2015-09-28
3215Implement rsync like include and exclude flags
3216swift
3217Support files > 5GB - thanks Sergey Tolmachev
3218v1.21 - 2015-09-22
3219New features
3220Display individual transfer progress
3221Make lsl output times in localtime
3222Fixes
3223Fix allowing user to override credentials again in Drive, GCS and ACD
3224Amazon Drive
3225Implement compliant pacing scheme
3226Google Drive
3227Make directory reads concurrent for increased speed.
3228v1.20 - 2015-09-15
3229New features
3230Amazon Drive support
3231Oauth support redone - fix many bugs and improve usability
3232Use "golang.org/x/oauth2" as oauth libary of choice
3233Improve oauth usability for smoother initial signup
3234drive, googlecloudstorage: optionally use auto config for the oauth token
3235Implement --dump-headers and --dump-bodies debug flags
3236Show multiple matched commands if abbreviation too short
3237Implement server side move where possible
3238local
3239Always use UNC paths internally on Windows - fixes a lot of bugs
3240dropbox
3241force use of our custom transport which makes timeouts work
3242Thanks to Klaus Post for lots of help with this release
3243v1.19 - 2015-08-28
3244New features
3245Server side copies for s3/swift/drive/dropbox/gcs
3246Move command - uses server side copies if it can
3247Implement --retries flag - tries 3 times by default
3248Build for plan9/amd64 and solaris/amd64 too
3249Fixes
3250Make a current version download with a fixed URL for scripting
3251Ignore rmdir in limited fs rather than throwing error
3252dropbox
3253Increase chunk size to improve upload speeds massively
3254Issue an error message when trying to upload bad file name
3255v1.18 - 2015-08-17
3256drive
3257Add --drive-use-trash flag so rclone trashes instead of deletes
3258Add "Forbidden to download" message for files with no downloadURL
3259dropbox
3260Remove datastore
3261This was deprecated and it caused a lot of problems
3262Modification times and MD5SUMs no longer stored
3263Fix uploading files > 2GB
3264s3
3265use official AWS SDK from github.com/aws/aws-sdk-go
3266NB will most likely require you to delete and recreate remote
3267enable multipart upload which enables files > 5GB
3268tested with Ceph / RadosGW / S3 emulation
3269many thanks to Sam Liston and Brian Haymore at the Utah Center for High Performance Computing for a Ceph test account
3270misc
3271Show errors when reading the config file
3272Do not print stats in quiet mode - thanks Leonid Shalupov
3273Add FAQ
3274Fix created directories not obeying umask
3275Linux installation instructions - thanks Shimon Doodkin
3276v1.17 - 2015-06-14
3277dropbox: fix case insensitivity issues - thanks Leonid Shalupov
3278v1.16 - 2015-06-09
3279Fix uploading big files which was causing timeouts or panics
3280Don't check md5sum after download with --size-only
3281v1.15 - 2015-06-06
3282Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper
3283Implement --size-only flag to sync on size not checksum & modtime
3284Expand docs and remove duplicated information
3285Document rclone's limitations with directories
3286dropbox: update docs about case insensitivity
3287v1.14 - 2015-05-21
3288local: fix encoding of non utf-8 file names - fixes a duplicate file problem
3289drive: docs about rate limiting
3290google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1"
3291v1.13 - 2015-05-10
3292Revise documentation (especially sync)
3293Implement --timeout and --conntimeout
3294s3: ignore etags from multipart uploads which aren't md5sums
3295v1.12 - 2015-03-15
3296drive: Use chunked upload for files above a certain size
3297drive: add --drive-chunk-size and --drive-upload-cutoff parameters
3298drive: switch to insert from update when a failed copy deletes the upload
3299core: Log duplicate files if they are detected
3300v1.11 - 2015-03-04
3301swift: add region parameter
3302drive: fix crash on failed to update remote mtime
3303In remote paths, change native directory separators to /
3304Add synchronization to ls/lsl/lsd output to stop corruptions
3305Ensure all stats/log messages to go stderr
3306Add --log-file flag to log everything (including panics) to file
3307Make it possible to disable stats printing with --stats=0
3308Implement --bwlimit to limit data transfer bandwidth
3309v1.10 - 2015-02-12
3310s3: list an unlimited number of items
3311Fix getting stuck in the configurator
3312v1.09 - 2015-02-07
3313windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:)
3314local: Fix directory separators on Windows
3315drive: fix rate limit exceeded errors
3316v1.08 - 2015-02-04
3317drive: fix subdirectory listing to not list entire drive
3318drive: Fix SetModTime
3319dropbox: adapt code to recent library changes
3320v1.07 - 2014-12-23
3321google cloud storage: fix memory leak
3322v1.06 - 2014-12-12
3323Fix "Couldn't find home directory" on OSX
3324swift: Add tenant parameter
3325Use new location of Google API packages
3326v1.05 - 2014-08-09
3327Improved tests and consequently lots of minor fixes
3328core: Fix race detected by go race detector
3329core: Fixes after running errcheck
3330drive: reset root directory on Rmdir and Purge
3331fs: Document that Purger returns error on empty directory, test and fix
3332google cloud storage: fix ListDir on subdirectory
3333google cloud storage: re-read metadata in SetModTime
3334s3: make reading metadata more reliable to work around eventual consistency problems
3335s3: strip trailing / from ListDir()
3336swift: return directories without / in ListDir
3337v1.04 - 2014-07-21
3338google cloud storage: Fix crash on Update
3339v1.03 - 2014-07-20
3340swift, s3, dropbox: fix updated files being marked as corrupted
3341Make compile with go 1.1 again
3342v1.02 - 2014-07-19
3343Implement Dropbox remote
3344Implement Google Cloud Storage remote
3345Verify Md5sums and Sizes after copies
3346Remove times from "ls" command - lists sizes only
3347Add add "lsl" - lists times and sizes
3348Add "md5sum" command
3349v1.01 - 2014-07-04
3350drive: fix transfer of big files using up lots of memory
3351v1.00 - 2014-07-03
3352drive: fix whole second dates
3353v0.99 - 2014-06-26
3354Fix --dry-run not working
3355Make compatible with go 1.1
3356v0.98 - 2014-05-30
3357s3: Treat missing Content-Length as 0 for some ceph installations
3358rclonetest: add file with a space in
3359v0.97 - 2014-05-05
3360Implement copying of single files
3361s3 & swift: support paths inside containers/buckets
3362v0.96 - 2014-04-24
3363drive: Fix multiple files of same name being created
3364drive: Use o.Update and fs.Put to optimise transfers
3365Add version number, -V and --version
3366v0.95 - 2014-03-28
3367rclone.org: website, docs and graphics
3368drive: fix path parsing
3369v0.94 - 2014-03-27
3370Change remote format one last time
3371GNU style flags
3372v0.93 - 2014-03-16
3373drive: store token in config file
3374cross compile other versions
3375set strict permissions on config file
3376v0.92 - 2014-03-15
3377Config fixes and --config option
3378v0.91 - 2014-03-15
3379Make config file
3380v0.90 - 2013-06-27
3381Project named rclone
3382v0.00 - 2012-11-18
3383Project started
3384Bugs and Limitations
3385
3386Empty directories are left behind / not created
3387
3388With remotes that have a concept of directory, eg Local and Drive, empty directories may be left behind, or not created when one was expected.
3389
3390This is because rclone doesn't have a concept of a directory - it only works on objects. Most of the object storage systems can't actually store a directory so there is nowhere for rclone to store anything about directories.
3391
3392You can work round this to some extent with thepurge command which will delete everything under the path, inluding empty directories.
3393
3394This may be fixed at some point in Issue #100
3395
3396Directory timestamps aren't preserved
3397
3398For the same reason as the above, rclone doesn't have a concept of a directory - it only works on objects, therefore it can't preserve the timestamps of directories.
3399
3400Frequently Asked Questions
3401
3402Do all cloud storage systems support all rclone commands
3403
3404Yes they do. All the rclone commands (eg sync, copy etc) will work on all the remote storage systems.
3405
3406Can I copy the config from one machine to another
3407
3408Sure! Rclone stores all of its config in a single file. If you want to find this file, the simplest way is to run rclone -h and look at the help for the --config flag which will tell you where it is.
3409
3410See the remote setup docs for more info.
3411
3412How do I configure rclone on a remote / headless box with no browser?
3413
3414This has now been documented in its own remote setup page.
3415
3416Can rclone sync directly from drive to s3
3417
3418Rclone can sync between two remote cloud storage systems just fine.
3419
3420Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth.
3421
3422The syncs would be incremental (on a file by file basis).
3423
3424Eg
3425
3426rclone sync drive:Folder s3:bucket
3427Using rclone from multiple locations at the same time
3428
3429You can use rclone from multiple places at the same time if you choose different subdirectory for the output, eg
3430
3431Server A> rclone sync /tmp/whatever remote:ServerA
3432Server B> rclone sync /tmp/whatever remote:ServerB
3433If you sync to the same directory then you should use rclone copy otherwise the two rclones may delete each others files, eg
3434
3435Server A> rclone copy /tmp/whatever remote:Backup
3436Server B> rclone copy /tmp/whatever remote:Backup
3437The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates.
3438
3439Why doesn't rclone support partial transfers / binary diffs like rsync?
3440
3441Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system.
3442
3443Cloud storage systems (at least none I've come across yet) don't support partially uploading an object. You can't take an existing object, and change some bytes in the middle of it.
3444
3445It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system.
3446
3447All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However to make this work efficiently this would require storing a significant amount of metadata, which breaks the desired 1:1 mapping of files to objects.
3448
3449Can rclone do bi-directional sync?
3450
3451No, not at present. rclone only does uni-directional sync from A -> B. It may do in the future though since it has all the primitives - it just requires writing the algorithm to do it.
3452
3453Can I use rclone with an HTTP proxy?
3454
3455Yes. rclone will use the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY, similar to cURL and other programs.
3456
3457HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
3458
3459The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
3460
3461The NO_PROXY allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance "foo.com" also matches "bar.foo.com".
3462
3463Rclone gives x509: failed to load system roots and no roots provided error
3464
3465This means that rclone can't file the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.
3466
3467Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.
3468
3469"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
3470"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL
3471"/etc/ssl/ca-bundle.pem", // OpenSUSE
3472"/etc/pki/tls/cacert.pem", // OpenELEC
3473So doing something like this should fix the problem. It also sets the time which is important for SSL to work properly.
3474
3475mkdir -p /etc/ssl/certs/
3476curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
3477ntpclient -s -h pool.ntp.org
3478Note that you may need to add the --insecure option to the curl command line if it doesn't work without.
3479
3480curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
3481Rclone gives Failed to load config file: function not implemented error
3482
3483Likely this means that you are running rclone on Linux version not supported by the go runtime, ie earlier than version 2.6.23.
3484
3485See the system requirements section in the go install docs for full details.
3486
3487All my uploaded docx/xlsx/pptx files appear as archive/zip
3488
3489This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats
3490
3491License
3492
3493This is free software under the terms of MIT the license (check the COPYING file included with the source code).
3494
3495Copyright (C) 2012 by Nick Craig-Wood http://www.craig-wood.com/nick/
3496
3497Permission is hereby granted, free of charge, to any person obtaining a copy
3498of this software and associated documentation files (the "Software"), to deal
3499in the Software without restriction, including without limitation the rights
3500to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
3501copies of the Software, and to permit persons to whom the Software is
3502furnished to do so, subject to the following conditions:
3503
3504The above copyright notice and this permission notice shall be included in
3505all copies or substantial portions of the Software.
3506
3507THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
3508IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
3509FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
3510AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
3511LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
3512OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
3513THE SOFTWARE.
3514Authors
3515
3516Nick Craig-Wood nick@craig-wood.com
3517Contributors
3518
3519Alex Couper amcouper@gmail.com
3520Leonid Shalupov leonid@shalupov.com
3521Shimon Doodkin helpmepro1@gmail.com
3522Colin Nicholson colin@colinn.com
3523Klaus Post klauspost@gmail.com
3524Sergey Tolmachev tolsi.ru@gmail.com
3525Adriano Aur�lio Meirelles adriano@atinge.com
3526C. Bess cbess@users.noreply.github.com
3527Dmitry Burdeev dibu28@gmail.com
3528Joseph Spurrier github@josephspurrier.com
3529Bj�rn Harrtell bjorn@wololo.org
3530Xavier Lucas xavier.lucas@corp.ovh.com
3531Werner Beroux werner@beroux.com
3532Brian Stengaard brian@stengaard.eu
3533Jakub Gedeon jgedeon@sofi.com
3534Jim Tittsler jwt@onjapan.net
3535Michal Witkowski michal@improbable.io
3536Fabian Ruff fabian.ruff@sap.com
3537Leigh Klotz klotz@quixey.com
3538Romain Lapray lapray.romain@gmail.com
3539Justin R. Wilson jrw972@gmail.com
3540Antonio Messina antonio.s.messina@gmail.com
3541Stefan G. Weichinger office@oops.co.at
3542Per Cederberg cederberg@gmail.com
3543Radek �enfeld rush@logic.cz
3544Contact the rclone project
3545
3546The project website is at:
3547
3548https://github.com/ncw/rclone
3549There you can file bug reports, ask for help or contribute pull requests.
3550
3551See also
3552
3553Google+ page for general comments
3554Or email Nick Craig-Wood