Vero to Amazon S3

This page provides you with instructions on how to extract data from Vero and load it into Amazon S3. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is Vero?

Vero is an event-driven email platform businesses can use to drive customer interaction campaigns.

What is S3?

Amazon S3 (Simple Storage Service) provides cloud-based object storage through a web service interface. You can use S3 to store and retrieve any amount of data, at any time, from anywhere on the web. S3 objects, which may be structured in any way, are stored in resources called buckets.

Getting data out of Vero

You can collect data from Vero's servers using webhooks (user-defined HTTP callbacks). Set up a webhook in your Vero account and define a URL that your script listens to and from which it can collect data.
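
As a rough sketch of the receiving end, here's a minimal Flask app that listens for Vero's POST requests and appends each payload to a local staging file. The endpoint path and file name are placeholders, not anything Vero prescribes; point your Vero webhook at whatever URL you actually expose.

    # minimal_webhook_listener.py -- a sketch of a Vero webhook receiver.
    # The route and the staging file name are placeholders.
    import json
    from flask import Flask, request

    app = Flask(__name__)
    STAGING_FILE = "vero_events.jsonl"  # hypothetical local staging file

    @app.route("/vero-webhooks", methods=["POST"])
    def receive_vero_event():
        event = request.get_json(force=True)  # Vero sends JSON in the request body
        # Append each event as one line of newline-delimited JSON for later loading
        with open(STAGING_FILE, "a") as f:
            f.write(json.dumps(event) + "\n")
        return "", 200

    if __name__ == "__main__":
        app.run(port=8080)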

Sample Vero data

Once you've set up HTTP endpoints, Vero will begin sending data via the POST request method. You can subscribe to events such as sent, delivered, opened, clicked, bounced, and unsubscribed. Data will be enclosed in the body of the request in JSON format. Here's a sample of what an inbound webhook payload from Vero looks like.

    {
        "sent_at": 1435016238,
        "type": "sent",
        "user": {
            "id": 123,
            "email": "steve@getvero.com"
        },
        "campaign": {
            "id": 987,
            "type": "transactional",
            "name": "Order confirmation",
            "subject": "Your order is being processed!",
            "trigger-event": "purchased item",
            "permalink": "http://app.getvero.com/view/1/341d64944577ac1f70f560e37db54a25",
            "variation": "Variation A"
        }
    }
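
The payload has a consistent shape: top-level event metadata plus nested user and campaign objects. If you want flat records that are easier to work with downstream, a small helper like this sketch can do the job. The flattening scheme and output field names are just one reasonable choice, not part of Vero's API.

    def flatten_vero_event(event):
        """Flatten a Vero webhook payload into a single-level dict."""
        user = event.get("user", {})
        campaign = event.get("campaign", {})
        return {
            "event_type": event.get("type"),
            "sent_at": event.get("sent_at"),        # Unix timestamp
            "user_id": user.get("id"),
            "user_email": user.get("email"),
            "campaign_id": campaign.get("id"),
            "campaign_name": campaign.get("name"),
            "campaign_type": campaign.get("type"),
        }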

Loading data into Amazon S3

To upload files, you must first create an S3 bucket. Once you have a bucket, you can add objects to it. An object can be any kind of file: a text file, a data file, a photo, or anything else. You can optionally compress or encrypt the files before you load them.
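
As a concrete sketch, here's how an upload might look with Python and the boto3 SDK. The bucket name, object key, and file path are placeholders, the gzip step is optional, and the snippet assumes your AWS credentials are already configured (for example in environment variables or ~/.aws/credentials).

    # upload_to_s3.py -- a sketch of loading a staged file into S3 with boto3.
    import gzip
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-vero-data"            # hypothetical bucket name
    SOURCE_FILE = "vero_events.jsonl"  # staged newline-delimited JSON

    # Optionally compress the file before uploading to save storage and transfer
    with open(SOURCE_FILE, "rb") as f:
        body = gzip.compress(f.read())

    s3.put_object(
        Bucket=BUCKET,
        Key="vero/events/vero_events.jsonl.gz",  # example key; pick your own layout
        Body=body,
        ServerSideEncryption="AES256",           # optional server-side encryption
    )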

Keeping Vero data up to date

At this point you've coded up a script or written a program to get the data you want and successfully moved it into S3. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.

Instead, identify key fields that your script can use to bookmark its progression through the data, so it can pick up where it left off as it looks for updated data. Fields that increase monotonically, such as the sent_at or created_at timestamps, work best for this. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Vero.
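
Here's a minimal sketch of that bookmarking logic, written against the staging file used in the earlier examples. It stores the last sent_at value it has processed in a small state file and only uploads newer records on each run, so you could schedule it with cron. The file names, bucket, and key layout are all placeholders.

    # incremental_sync.py -- a sketch of bookmark-based incremental loading.
    import json
    import os
    import boto3

    STATE_FILE = "last_sent_at.txt"
    STAGING_FILE = "vero_events.jsonl"
    BUCKET = "my-vero-data"

    def load_bookmark():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return int(f.read().strip() or 0)
        return 0

    def save_bookmark(value):
        with open(STATE_FILE, "w") as f:
            f.write(str(value))

    def sync():
        bookmark = load_bookmark()
        new_events = []
        with open(STAGING_FILE) as f:
            for line in f:
                event = json.loads(line)
                if event.get("sent_at", 0) > bookmark:  # only records newer than the bookmark
                    new_events.append(event)
        if not new_events:
            return  # nothing new since the last run
        body = "\n".join(json.dumps(e) for e in new_events).encode("utf-8")
        boto3.client("s3").put_object(
            Bucket=BUCKET,
            Key=f"vero/incremental/{bookmark}.jsonl",  # one object per run; naming is up to you
            Body=body,
        )
        save_bookmark(max(e["sent_at"] for e in new_events))

    if __name__ == "__main__":
        sync()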

And remember, as with any code, once you write it, you have to maintain it. If Vero modifies its API, or the API sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.

Other data warehouse options

S3 is great, but sometimes you want a more structured repository that can serve as a basis for BI reports and data analytics — in short, a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, Snowflake, Microsoft Azure SQL Data Warehouse, or Panoply, all of which let you query data with similar SQL syntax. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Azure SQL Data Warehouse, and To Panoply.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to move data from Vero to Amazon S3 automatically. With just a few clicks, Stitch starts extracting your Vero data, structuring it in a way that's optimized for analysis, and loading that data into your Amazon S3 bucket.