11/25/15
Andrew Zuercher
Securing S3 Assets in nodejs with Temporary URLs

I'm often asked to build applications whose assets I'd really rather serve from Amazon's Simple Storage Service (S3). It's far less expensive, can be made available in multiple regions, and doesn't drag down the performance of my application server. Tons of open-source utilities exist to make it interoperate with my models in RoR and nodejs, for example. The catch is that while public access is fine for vanilla deployments, assets that carry Intellectual Property (IP) need to be locked down a bit.

Traditional s3 implementation

So let's cover the traditional deployment most people use with s3. Typically a webserver such as nginx sits in front of your nodejs instance and proxies to it, most likely using the proxy module, and hey, it's great. In the nodejs instance you create the assets and store them in s3 with public read access. The nodejs instance renders pages, or provides REST services, that put the public s3 URL in the browser, and the end user views those links directly.
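To make the trade-off concrete, the rendered markup in this setup points straight at the bucket. A minimal sketch of building such a public link (the bucket name and key below are hypothetical):

```javascript
// Build the public URL for an object stored with a public-read ACL.
// Anyone holding this string can fetch the asset; no auth is involved.
function publicS3Url(bucket, key) {
  return 'https://' + bucket + '.s3.amazonaws.com/' + key;
}

console.log(publicS3Url('my-library-bucket', 'library-item/items/manual.pdf'));
// https://my-library-bucket.s3.amazonaws.com/library-item/items/manual.pdf
```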

The downside of this model is that any logged-in user can copy the URL from the address bar, send it to someone else, and even if they are not logged into the application, that other person can view the s3 asset. This is clearly bad.

Surrender s3 implementation

In this scenario, instead of storing the assets in s3 publicly, we store them privately. We write a service in nodejs that checks whether the request belongs to an authenticated session; if it does, we read the entire asset from s3 and stream it to the user. The link to the asset is a route through our app server, so the end user never knows s3 is involved. If the user shares the link, the recipient is redirected to login, since the service performs the same check.

This falls down because now we've traded off pretty much all the benefits that s3 provides. We might as well wave a white flag over the top of our solution.

Expiring URLs s3 implementation

In this scenario we don't directly provide the assets when we render the page; instead we provide a redirect link to the asset. When the user clicks the redirect link, a temporary URL to the s3 asset is returned. This is nice because it doesn't require us to generate the expiring URL at render time; we can wait until the user clicks the link. It's very applicable where the link is downloaded or viewed outside the web application, such as a PDF, video, or some other binary document. If the user shares the redirect link, the recipient is redirected to a login. And if the user shares the target temporary link, they will see an s3 access denied once it expires.

This is awesome in that we are able to leverage all the good stuff s3 gives us, and it stays reasonably secure, since a leaked URL is only usable for the short window before it expires.

Keystone Implementation

The first thing we need to do is create a little wrapper for our temporary URL generation. I created the following under /lib/s3-util.js. You'll notice that the method is synchronous, which is awesome because it can be performed inline and doesn't require a remote call. Note that I've added a new S3_URL_TIMEOUT setting (in minutes) to my .env to allow a global expiration.
```javascript
var sig = require('amazon-s3-url-signer');

var bucket1 = sig.urlSigner(process.env.S3_KEY, process.env.S3_SECRET);

module.exports = {
  temporaryUrl: function (s3attribute, timeout) {
    timeout = timeout ? timeout : process.env.S3_URL_TIMEOUT;
    var filename = s3attribute.filename ? s3attribute.filename : s3attribute;
    return bucket1.getUrl('GET', filename, process.env.S3_BUCKET, parseInt(timeout, 10));
  }
};
```

Don't forget to install the amazon-s3-url-signer package:
npm install --save amazon-s3-url-signer
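For reference, the wrapper expects these variables in your .env (all values below are made-up placeholders; substitute your own credentials and bucket):

```shell
# Hypothetical placeholder values; use your own AWS credentials.
S3_KEY=AKIAIOSFODNN7EXAMPLE
S3_SECRET=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
S3_BUCKET=my-library-bucket
# Global expiration for temporary URLs, in minutes
S3_URL_TIMEOUT=5
```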

Now we create our redirect service. I created the following file under /routes/services/assets.js.
```javascript
var s3util = require('../../lib/s3-util');

module.exports = {
  getS3Redirect: function (req, res, next) {
    if (!req.query.path) {
      // Without this branch, a request missing the parameter would hang.
      return res.status(400).send('Missing path parameter');
    }
    var url = s3util.temporaryUrl(req.query.path);
    res.redirect(302, url);
  }
};
```

Let's add the route in /routes/index.js:
```javascript
var routes = {
  views: importRoutes('./views'),
  services: importRoutes('./services')
};

...

exports = module.exports = function (app) {
  ...
  app.get('/assets', middleware.requireUser, routes.services.assets.getS3Redirect);
};
```

Cool. Now let's add a handlebars helper by modifying /templates/views/helpers/index.js:

```javascript
var s3util = require('../../../lib/s3-util');

module.exports = function () {
  ...
  _helpers.assetUrl = function (s3Attribute, options) {
    return '/assets?path=' + s3Attribute.path + s3Attribute.filename;
  };
};
```

Let's modify the s3 field in our model to include the "x-amz-acl" header so that the s3 asset is stored as private:
```javascript
Model.add({
  item: {
    type: Types.S3File,
    headers: [{ name: 'x-amz-acl', value: 'private' }],
    s3path: 'library-item/items'
  }
});
```

And finally, now we can simply provide the following in our rendered template:
```html
<div class="container">
  <h1>Items</h1>
</div>
<table class="table table-striped">
  <tr>
    <th>item</th>
  </tr>
  {{#each libraryItems}}
  <tr>
    <td><a href="{{assetUrl item}}">{{name}}</a></td>
  </tr>
  {{/each}}
</table>
```

Summary

All in all, this approach incorporates a cool feature of s3 that can be applied to web applications very easily. Once integrated, it extends cleanly across multiple models in your application, all while preserving the architectural benefits of using s3 in the first place.
If you are interested in learning more or want assistance with creating a web app/CMS, please contact us at http://barrelproofapps.com or send an email to info@barrelproofapps.com.