Use FZF to Show the Contents of an AWS S3 Object
Once you discover fzf, you want to use it everywhere. It’s clean, powerful, and versatile. In this post, I show you how to use fzf to display the contents of a file in S3.
Dependencies
- fzf
- AWS Command Line Interface (CLI)
- An AWS account
- One or more S3 buckets with one or more files in them
The Function
We’re calling our shell function cs3, short for "cat s3". In it, we will:
- Display a listing of all S3 buckets and allow selection
- Display a listing of all files (objects) in the selected bucket and allow selection
- Copy the selected file to a temporary file on the local machine
- Display the file using the cat command
- Delete the temporary file
The finished function looks like this:
function cs3() {
  local bucket file tmpfile
  bucket=$(aws s3 ls | tr -s ' ' | cut -d' ' -f 3- | fzf)
  [ -n "$bucket" ] && file=$(aws s3 ls "$bucket" --recursive | tr -s ' ' | cut -d' ' -f 4- | fzf)
  if [ -n "$file" ]; then
    tmpfile=$(mktemp)
    aws s3 cp "s3://$bucket/$file" "$tmpfile" && cat "$tmpfile"
    rm "$tmpfile"
  fi
}
I’ve tested it on zsh only, so let me know if you have issues on other shells.
The Breakdown
We use the aws s3 CLI command to list buckets, list files (objects), and copy the selected file to the local machine. Make sure you have the AWS CLI installed and configured.
Select a Bucket
We start by listing the S3 buckets:
aws s3 ls
That listing has this format:
2020-07-27 11:18:03 this.is.my.bucket.1
2020-07-27 11:18:18 this.is.my.bucket.2
For our selection, we want just the bucket name. We can parse it either before or after passing the listing to fzf. I choose to trim the list to just the bucket name before passing to fzf, so the listing isn’t cluttered with date and time.
To filter the dates and times out of the list, we first pipe to the translate command (tr), telling it to squeeze repeats (-s) of the space character (' '). This turns sequential spaces into a single space, so that when we split on spaces we can do so predictably. Frankly, this isn’t necessary for the bucket listing, since the date, time, and bucket name fields are already separated by single spaces. We’ll need it for the object listings, though, and it also protects us if AWS ever changes this format.
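To see the squeeze in action without touching AWS, run tr -s against a sample listing line (the file name here is made up):

```shell
# Runs of spaces collapse into a single space, making field positions predictable.
echo "2020-04-14 17:43:32       3724 file.js" | tr -s ' '
# → 2020-04-14 17:43:32 3724 file.js
```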
We’re not done filtering the dates and times at this point; we’ve just normalized the spaces between fields. To actually filter them out, we use the cut command, indicating that we want everything from the third field on (the trailing - after the 3 means all subsequent fields). Keeping everything after the third field is another case of overkill, since bucket names can’t contain spaces, but it’s cheap to add and future-proofs us if AWS ever starts allowing them.
At this point, our listing looks like this:
this.is.my.bucket.1
this.is.my.bucket.2
We pipe it to fzf and let it run. The result of the selection (either the selected bucket name, or blank if the user hit Ctrl+C) is stored in the $bucket variable.
Select a File
After checking that we have a bucket name ([ -n "$bucket" ]), we get a recursive listing of the selected bucket. Note that we don’t pass anything for the --page-size parameter: the default is 1000, which is also the maximum allowed, so there’s no reason to change it. If you have more than 1000 objects in your bucket, you’re out of luck. The file (object) listing looks something like this:
2020-04-14 17:43:32 3724 webpack-runtime-6c5bc4cffdf38068cc81.js
2020-04-14 17:43:32 16704 webpack-runtime-6c5bc4cffdf38068cc81.js.map
2020-06-22 20:09:31 2575 webpack-runtime-6c6d36efeefeade6a97a.js
2020-06-22 20:09:32 12821 webpack-runtime-6c6d36efeefeade6a97a.js.map
Note the variable number of spaces between the time and the file size; piping through tr -s ' ' squeezes those runs into single spaces, so the file name can reliably be found in the fourth field. Again, we pipe that output to the cut command, but this time the trailing dash after the field number (4) does matter: S3 object names can contain spaces.
Note that we have a bug: object names with consecutive spaces will show as single spaces, so after selection we’ll have the wrong name. The simplest solution, and the one I’ve chosen, is to never put consecutive spaces in S3 object names. Feel free to find a different solution.
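If you do want to handle such names, one possible workaround (a sketch on my part, not what cs3 above does) is to strip the fixed date/time/size prefix with sed instead of squeezing spaces, which leaves the object key itself untouched:

```shell
# Remove the leading "date time size " fields; runs of spaces in the key survive.
echo "2020-04-14 17:43:32       3724 notes  with  spaces.txt" \
  | sed -E 's/^[0-9]{4}-[0-9]{2}-[0-9]{2} +[0-9:]{8} +[0-9]+ //'
# → notes  with  spaces.txt
```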
Finally, we pipe our list, which now looks like this:
webpack-runtime-6c5bc4cffdf38068cc81.js
webpack-runtime-6c5bc4cffdf38068cc81.js.map
webpack-runtime-6c6d36efeefeade6a97a.js
webpack-runtime-6c6d36efeefeade6a97a.js.map
to fzf.
Display the File
After checking that the user selected a file, we use mktemp to create a temporary file and store its name in the variable tmpfile. We use aws s3 cp to copy the selected file from the selected bucket into the temporary file, then cat to display its contents. Finally, we delete the temporary file.
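One refinement you might consider (my own suggestion, not part of cs3 above): run the copy-and-display step in a subshell with a trap, so the temporary file is removed even if the copy or display fails partway. The printf here is a stand-in for the aws s3 cp step:

```shell
tmpfile=$(mktemp)
(
  trap 'rm -f "$tmpfile"' EXIT               # fires when the subshell exits, success or failure
  printf 'example contents\n' > "$tmpfile"   # stand-in for: aws s3 cp "s3://$bucket/$file" "$tmpfile"
  cat "$tmpfile"
)
# $tmpfile no longer exists here, even if the commands above had failed
```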
Closing Thoughts
If you aren’t already using bat, and aliasing it to cat, go do that and thank me later.
If you want to preserve the file locally, rather than display and delete, just redirect it to whatever file you want:
cs3 > my_file.txt
Now it’s your turn. Here are some more fzf ideas. Go make your life easier by using fzf, and share what you learn!