Moving Ad Data From Facebook to Google Analytics


Background

In the world of ads, apparently it’s good to not just run ads but also see how effective they are. Who woulda thought? Google’s ad platform is obviously a big one but they’re certainly not the only one. In this case, we’re getting data from Facebook’s ad system, putting it into a CSV, and then uploading it to Google Analytics (GA) so we can run queries against it (or something. See disclaimer).

I started here and it really wasn’t the most helpful page. Hopefully this walkthrough alleviates some of the pains I had.

To get started, you’ll want to know what kind of data you want to import. You must have ga:medium (ex: “organic”, “email”, etc.) and ga:source (ex: “facebook”).

Aside from those two, you have to have at least one of the following:

  • ga:adClicks
  • ga:adCost
  • ga:impressions

Otherwise, what are you even tracking?

So think about your data in these terms. You can export it now from your data source, or you can do it later. The world is yours.
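To make that concrete, here’s a rough sketch of what a CSV for this might look like. These headers are just a hypothetical example; they need to line up with whatever schema you define later in GA’s Data Import setup.

ga:medium,ga:source,ga:adClicks,ga:adCost,ga:impressions
cpc,facebook,120,45.67,15000
cpc,facebook,98,32.10,12000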

API access

You’ll need an account that can access the GA API. Google has approximately a billion different APIs so hit up their Developer Console. Create a new project then select it to create users within it. I called my project “Analytics”. You can get similarly creative if you want. We’re going to hang out in the APIs & auth section. Go to the item called Analytics API and turn it on. Easy.

Since we’re working on the server, we’ll want to create a new Client ID and make sure it’s a service application. Once you create that, a .p12 file will download to your machine. Don’t close this tab.

Google gives you a .p12 file but when authenticating, you need a .pem file. If you’re on a Mac, you can use the handy openssl tool on the command line to create a .pem. $ cd into your Downloads directory and run this:

openssl pkcs12 -in downloaded-file.p12 -out key.pem -nodes

I moved the .pem to the project directory just so it’d be easier to access later.

Take a quick look at the “Permissions” section and make sure that this new user you created has the “Can Edit” permissions.

GA setup

Firstly, and this really frustrated me for a few hours: take the email address you got from creating the Client ID and add it as a user to the GA profile you’re going to be uploading data to.

Let’s go ahead and get GA set up. Log in to your GA profile and use the Admin tab on top. Pick the property you want to upload data to, then use the Data Import section to create a new data set for Cost Data. Give it a name, select your views, and then set up your schema. Grab the Custom Data Source ID because you’ll need it later. Save it and let’s move on.

JavaScript

Google has a handy Node lib that makes connecting to their APIs a bit easier. I’m going to cross my fingers that you understand a bit of Node.js, so just get the file started with your requires:

var googleapis = require('googleapis');
var fs = require('fs');

Google uses a JSON Web Token to connect to their APIs. You don’t have to make it yourself, exactly; you just have to pass parameters to it. Creating the auth client is kind of easy assuming you blindly follow examples and don’t try to read any docs. Here’s what I ended up with and it worked:

// auth details
var email = 'EMAIL_ADDRESS_HERE'; // (str) the email address provided when you create an API user
var keyPath = './key.pem';        // (str) check out step 6 here: https://github.com/extrabacon/google-oauth-jwt#creating-a-service-account-using-the-google-developers-console
var key = '';                     // (str) not sure, apparently you don't need it if you pass a working .pem keyfile

// Construct the JWT client
var authClient = new googleapis.auth.JWT(email, keyPath, key, ['https://www.googleapis.com/auth/analytics']);

The googleapis module makes the whole process fairly easy; my biggest difficulty was finding what I had to pass to the upload function before it would actually upload.

The docs for dailyUploads are fairly helpful but didn’t do the best job describing what the options were. Luckily, I figured them out so you don’t have to.

var opts = {
    accountId: 'ACCOUNT_ID',              // (str) in 'UA-5345434-1', the accountId is 5345434
    appendNumber: 1,                      // (int) 1-20. if you're just adding one CSV, this will be 1, if you append another, increment this
    customDataSourceId: 'DATA_SOURCE_ID', // (str) the Custom Data Source ID you got when you created the data set in GA admin
    date: 'YYYY-MM-DD',                   // (str) the date that the data represents. 
    uploadType: 'media',                  // (str) probably just stick with media if you're uploading a simple CSV
    type: 'cost',                         // (str) sweet option, Google. The only acceptable value here is 'cost' so leave as-is
    webPropertyId: 'WEB_PROPERTY_ID'      // (str) this is the GA id for the profile. Something like 'UA-5345434-1'
};

Real quick, we want to get a string representation of the CSV, so go ahead and read that in using Node’s fs module.

var pathToCSV = './data/data.csv';
var csvData = fs.readFileSync(pathToCSV, "utf8");

So now we can create the client, kick some options to it, authorize, and upload our data. Make sure this is wrapped in an authClient.authorize() call so we’re producing an access token.

client
    .analytics.management.dailyUploads.upload( opts )
    .withMedia( 'application/octet-stream', csvData )
    .withAuthClient( authClient )
    .execute(function( err, results ){
        if ( err ) {
            console.log( err );
            return;
        } else {
            console.log( results );
        }
    });

If you’ve done everything right, you should be able to run $ node app.js and get a response that contains a nextAppendLink.
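If you need to append a second CSV to the same day later on, the appendNumber option is what you bump (per its comment above). A hypothetical second call, with moreCsvData being another CSV string you’ve read in, might look like:

opts.appendNumber = 2; // second CSV appended to the same daily upload
client
    .analytics.management.dailyUploads.upload( opts )
    .withMedia( 'application/octet-stream', moreCsvData )
    .withAuthClient( authClient )
    .execute(function( err, results ){
        console.log( err || results );
    });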

All together now!

var googleapis = require('googleapis');
var fs = require('fs');

// auth details
var email = 'EMAIL_ADDRESS_HERE'; // (str) the email address provided when you create an API user
var keyPath = './key.pem';        // (str) check out step 6 here: https://github.com/extrabacon/google-oauth-jwt#creating-a-service-account-using-the-google-developers-console
var key = '';                     // (str) not sure, apparently not needed if you pass a working .pem keyfile

// Construct the JWT client
var authClient = new googleapis.auth.JWT(email, keyPath, key, ['https://www.googleapis.com/auth/analytics']);

// Authorize it to produce an access token
authClient.authorize(function(err, tokens) {
    if (err) {
        console.log( err );
        return;
    }
    googleapis
        .discover('analytics', 'v3')
        .execute(function(err, client) {

            // data
            var pathToCSV = './data/data.csv';
            var csvData = fs.readFileSync(pathToCSV, "utf8");

            // GA Options
            var accountId = 'ACCOUNT_ID_HERE';            // (str) in 'UA-5345434-1', the accountId is 5345434
            var formattedDate = 'YYYY-MM-DD';             // (str) date for data
            var customSourceId = 'CUSTOM_DATA_SOURCE_ID'; // (str) in GA admin control panel, get this from data import=>manage a data set
            var webId = 'WEB_PROFILE_ID';                 // (str) id from google analytics for the profile

            var opts = {
                accountId: accountId,
                appendNumber: 1,
                customDataSourceId: customSourceId,
                date: formattedDate,
                reset: true,
                uploadType: 'media',
                type: 'cost',
                webPropertyId: webId
            };

            client
                .analytics.management.dailyUploads.upload( opts )
                .withMedia( 'application/octet-stream', csvData )
                .withAuthClient( authClient )
                .execute(function( err, results ){
                    if ( err ) {
                        console.log( err );
                        return;
                    } else {
                        console.log( results );
                    }
                });
        });
});

So there you go: you can upload cost data to GA. Now you can query it just like anything else in GA.

Redesign


I spent a bit of time working on a redesign. My starting point was certainly functional but it was the default for Octopress and I wanted something different. I wasn’t happy with it but as I said in my first post, I just needed to get something that I could start blogging on.

Here I am about a year and a half later and while I haven’t made a ton of progress on blogging—this will be only my eighth published post—I have used this blog. It has been a fun place to tinker and most recently, toy with design.

In the spirit of the great king, I’ve removed almost everything from this blog except comments. Even those may disappear at some point. I’m tired of slow, I’m tired of bulk, I just want content and I want it now.

Freelancing


I’ve been taking on a bit more freelancing work lately. It’s been really fun (and really not-so-fun; WordPress database migration, anyone?) working with a friend/really talented designer. I’m learning tons along the way, and not just on the technical side of things but also in the world of business and relationships.

I’m trying to learn as much as I can about freelancing. I’m sacrificing other parts of life to do work outside of work, but I think it’s time well spent.

I was checking my usual news sites and stumbled across a really insightful post from Alan Hollis, “My first year freelancing”. It has some great tips and gave me a lot to think about.

Just as important as the article though is the discussion following it. I don’t know how to digest it all right now, but I like what I’m reading. I’ll close this post with the following from user damoncali:

You need to get out of the mindset of charging for your time. You are charging for the value you create – and time is just a crude approximation of that at best.

Using Grunt-contrib-livereload With Yeoman’s Grunt-regarde


Note: Since I wrote this, I found out this method has been deprecated. You can now spawn a livereload server from grunt-contrib-watch. Details are available at https://github.com/gruntjs/grunt-contrib-watch#optionslivereload.


For starters, you’ll want to make sure you’ve updated your package.json with the right dependencies. I’m not sure that livereload works with the baked-in “watch” task, and I’ve been using grunt-regarde of late. My package.json usually looks like this:

create your dependencies in package.json
"dependencies": {
  "grunt": "~0.4.x",
  "grunt-contrib-livereload": "0.1.2",
  "grunt-contrib-connect": "0.2.0",
  "grunt-regarde": "0.1.1"
},

You obviously want grunt and livereload; connect seems to help with mounting folders, and regarde is like grunt-watch, it just seems to work better (I forget why exactly).

You could make your package.json even better by specifying livereload in its own “devDependencies” object if you’re so inclined (see the sketch below). Now, run your good old-fashioned npm install to get the goodies in your project.
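If you do split things out, a hypothetical version of the same dependencies might look like this:

"dependencies": {
  "grunt": "~0.4.x"
},
"devDependencies": {
  "grunt-contrib-livereload": "0.1.2",
  "grunt-contrib-connect": "0.2.0",
  "grunt-regarde": "0.1.1"
},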

Let’s talk gruntfiles:

As you probably know, the gruntfile is what makes the magic happen. Somewhere towards the bottom of your gruntfile, you’ll want to specify

load tasks in your gruntfile
grunt.loadNpmTasks('grunt-regarde');
grunt.loadNpmTasks('grunt-contrib-livereload');
grunt.loadNpmTasks('grunt-contrib-connect');

At the top of your gruntfile, we’ll want to add some utils for livereload. Under /*global module:false*/, go ahead and add var lrSnippet = require('grunt-contrib-livereload/lib/utils').livereloadSnippet;.
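For reference, a minimal sketch of the top of the gruntfile with that in place (I’m also pulling in Node’s path module, since the folderMount helper below relies on path.resolve):

/*global module:false*/
var path = require('path');
var lrSnippet = require('grunt-contrib-livereload/lib/utils').livereloadSnippet;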

After that, you don’t really need to learn connect, you just gotta use it. Check it out:

var folderMount = function folderMount(connect, point) {
  return connect.static(path.resolve(point));
};

This comes before module.exports = function(grunt) {

Now let’s get into the meat of the gruntfile. I’m not entirely sure what connect is doing, but this is where the middleware magic comes into play. In your module.exports, add:

connect: {
  livereload: {
    options: {
      port: 9999,
      middleware: function(connect, options) {
        return [lrSnippet, folderMount(connect, '.')]
      }
    }
  }
},

Now we want to watch for changes on the files. I like to set up a few different tasks since I don’t want my whole grunt process running every time I save a CSS file. Here’s what I work with (again, add to module.exports):

set up tasks to run for various file changes
regarde: {
  txt: {
    files: ['styles/*.css', 'index.html'],
    tasks: ['livereload']
  },
  styles: {
    files: ['sass/*.scss', 'sass/*/*.scss'],
    tasks: ['compass']
  },
  templates: {
    files: ['templates/*.jade'],
    tasks: ['jade']
  }
},

You can see that I only want livereload to fire when there are changes to my compiled css (*.css) or to my compiled html. If I edit a SCSS file, I want to fire off just compass. If I edit a jade template, I want to only fire the jade to HTML compiler. I think you can see what’s going on. You can toy with this, just be smart about it because you could get caught in an infinite loop.

Lastly, you need to fire off these processes. I like tying them all to my main grunt task because my gruntfile is just that sweet.

// Default task.
grunt.registerTask('default', ['livereload-start', 'connect', 'regarde']);

Now, when you fire up grunt in the CLI, you should (hopefully, maybe, cross your fingers) get something like this:

Running "connect:livereload" (connect) task
Starting connect web server on localhost:9999.

Browse to http://localhost:9999/yourpage.html and watch magic happen.

Full gruntfile here. Full package.json here.

Shows


Sometime this year I started a Gmail label called “tickets” so I could better track the shows I go to. I’ve gone to a lot. There are eighteen emails with this label, and I imagine I’ve gone to at least a handful more impromptu shows. So that comes to somewhere around two dozen shows, many with openers and co-headliners. Even without having gone to a festival, this year I’ll probably have seen upwards of fifty different bands, artists, and performers.

So where’s this leave me? Probably with a bit of hearing damage but also with several unforgettable (and some very forgettable) experiences. I’ve been willing to see just about any kind of show this year ranging from hardcore shows to dancey DJ sets to Brooklyn-based folk music in a room of about ten people. I’ve seen the greatest concert of my life and walked out of a show after three minutes.

These performances have been across the board. I saw The Ghost Inside, Thrice and Refused and found myself in the pit each time. These shows aren’t about the musical experience so much as they are about being fully present and letting the music, the crowd, and the venue take their toll on your body.

One of the more memorable shows this year was when I saw Snow Patrol at the 9:30 club. I went into the show almost entirely unaware of their catalog. I went because it was live music (and I had a crush). I didn’t expect much since all I knew of them was their song “Open Your Eyes.” Going with no expectations allowed me to focus entirely on the concert experience. I didn’t have any songs I wanted to hear, I didn’t know how well they performed in another venue, I just knew I got the treat of seeing live music. They commanded the stage at the 9:30 club with their energy and musicianship in ways I haven’t seen from other bands.

These shows have taken me all over Washington DC and connected me with people who I probably wouldn’t have connected with otherwise. I’ve seen beautiful concert venues and I’ve stood in a room of five people while a guy and girl sing about being in love with each other. In the coming weeks I’ll be seeing a band from my middle school years, a hip-hop producer, and a South African dreamy electronic synth-pop band. I have no need to take every concert seriously; shows should be fun. At the same time, I find something magical in watching people pour out their hearts and souls through music. I hope you get to experience that magic soon.

My upcoming shows

After the Fact .gitignore


Lately, I’ve been working on a project with a few other developers and, as you can imagine, GitHub has been our go-to for version control and management of changes. We’re each working locally, but the time came when we needed to give the client access to the CMS (ExpressionEngine) and allow her to start entering content. OK, easy, right? Well, kind of.

For local dev, I’m still using MAMP just for the sake of simplicity so all my paths are relative to /Applications/MAMP/htdocs. This is all well and good but my MediaTemple shared GS doesn’t have MAMP. I had to get at MySQL in a different location.

During development, we’ve been running a few scripts to manage the transportation of EE’s database, and the paths were working based on the path to MAMP. It wasn’t the cleanest way of accomplishing things but it worked for what we needed. As soon as we weren’t using MAMP, we needed to change paths. Since the server and our local environments were pulling from the same repo, we had a few options: we could either use environment variables (probably a good idea) or each person and the server could maintain an independent database configuration (not quite as good). Since my knowledge of the command line is a bit limited and MediaTemple has some restrictions in place on their shared servers, we went with the second solution. We now needed to maintain independent copies of our configuration files with different settings based on the environment, but git was already tracking the files and pushing/pulling them.

As usual, Stack Overflow came to the rescue. To untrack a file that has already been committed but NOT delete it from the system, use the following: git rm --cached filename
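A minimal sketch of that step, using a hypothetical config file name:

# stop tracking the file but keep it on disk
git rm --cached config/database.php

# tell git to ignore it from here on out
echo "config/database.php" >> .gitignore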

Then, go ahead and commit your project changes. You should see the file drop out of the git repo but stay on your system. This proved to be a little problematic because when another developer pulled, he lost the file entirely and had to recreate it. This wasn’t a huge deal because it was in the history, but this should somehow be avoided; I’m just not sure how. Any tips?

My Grandfather


All of my grandparents are dead. In fact, they’re long dead. I lost my last grandparent (my mom’s mom) when I was in elementary school. Prior to her death, she suffered from Multiple Sclerosis and lung cancer. Smoking will do that to you, I suppose.

This said, my family is unique in that it was tragically affected by death on a biannual basis. Cancer, MS, and heart attacks took my grandparents from us. I’m not sure how this shaped me, other than that through my middle and high school years I didn’t have trips to see my grandparents like so many of my friends had and dreaded (wanna trade?).

I watched The Graduate a few days ago and was impressed by the soundtrack that features Simon & Garfunkel almost exclusively. I have faint memories of each of my grandparents and while nothing especially stands out about my dad’s father, I remember being in his house. After my grandmother, his wife, died (I was around five years old) he kind of closed up. From what I know and have heard, he wasn’t an exceptionally welcoming person but the loss of my grandmother was especially hard on him. He was successful in business so he had a lot of money but he seemed to be pretty lonely. He bought an RV, he had a boat, I remember flying with him in his plane; but I don’t know how happy he really was.

One very specific memory I have is spending time in his house after he died. I think my dad and his siblings were trying to move things out and I happened to be around. I was in the sunroom fiddling with a boom box (probably the first one I had ever seen) and it had a CD player. I remember some of the CDs that were in the CD box very well (probably because my mom took them away from me). The ones I remember specifically were:

  • The Top Gun Official Sound Track
  • Boyz II Men – II
  • Yanni Live At the Acropolis
  • Simon & Garfunkel’s Greatest Hits

There were probably more that I don’t remember, but those are the ones that stood out. Up to that point, the music I had access to was what my parents played in the car or what I heard at church (often the same thing). When I played Boyz II Men, I had no idea what these smooth, harmonious sounds were on Thank You or what the hard snares were on Playing with the Boys. The album that stuck out the most, though, was Simon & Garfunkel’s Greatest Hits.

I remember the sound and scratch of the CD player queueing up and reading 00 on the track listing. Suddenly, as the laser in the CD player found its mark on the disc, the number would jump to 01 and the most unbelievable sound of acoustic guitar came through. This was followed by men singing nonsense words, “dee dee dee” and “doo doo do do do doo,” and then came a bold chorus talking about Mrs. Robinson, Jesus, and Heaven. The lyrics were crisp and clear. I knew I had stumbled on something great, but I had no idea why or what it was.

I held onto that CD and listened to it over and over, every time in that same boombox. Even though it was a greatest hits compilation, the songs seemed like they were written to flow in and out of each other.

So as I was watching The Graduate and singing along with every word, I couldn’t help but find myself sitting in that sunroom of my grandfather’s house. It’s my first memory of music having a meaning outside of what the lyrics said. It became something for the soul, something that conveyed the feelings of not only the artists but the feelings of a generation. I was undoubtedly too young to understand all this at the time but I remember the feeling of wonder.

It’s the same feeling I get as I sit here listening to America. I can’t say I know what it means but it feels careless yet hopeful.

I’ll take it.

Play Listen Repeat Vol 31


not mine. from pandora

[Music] exists as and for appearance. There is no actuality underlying it… Musical coherence is abstracted from actuality, not based upon it… [Music’s] appearance and its actuality are one and the same.

Geoffrey Payzant, Glenn Gould: Music and Mind

Clearly, it’s possible to create believable, effective, amazing recorded works independent of the quality of the music on which the recording is based. Even when a recording’s musical content is lackluster or unremarkable, vibrant elements (a great vocal performance, a hook, clever stylistic choices, etc.) can work to make the recording itself into a potent creation, such that the so-called “deeper” content doesn’t matter.

This seems obvious enough. Is anyone really going to be troubled by the suggestion that the sounds within a recording are of comparable importance to the so-called content (i.e., the musical ideas, words, melody) that we ordinarily perceive to underlie it? I doubt it.

Now, some critics might say that this sort of music makes silk purses from sows’ ears, and common sense would probably agree. Anyone who has ever noticed a vapid lyric or a tired chord progression underpinning a beloved popular song has had that view, if only for a minute. Silk purses from sows’ ears.

But when we look more closely at a recording, we get into some trouble, because (to state the obvious) the true contents of a recording consist only of the actual recorded sounds themselves, and nothing more. The recorded sounds are not just of comparable importance, they are all there is.

Music is, as Payzant says, “entirely phenomenal… [it] actually appears, and its appearance is the kind of actuality it has.” In other words, that “coherence” that we recognize as a song is something that we abstract from the actual music.

This is both obviously true, and also more than a little unsettling, since most of what I hear when I listen to music, and most of what I am seeking when I listen, has to do with the sense of a song that is behind the one I can hear. I am hearing content that is implied by the recording’s contents; or, to use Payzant’s terminology, I am interested in what I can abstract from the sounds I am hearing.

It is usually the excellence of those perceived deeper implications, and the quality of the communication transmitted through the music from another human soul who somehow found and adapted their experience into art, that matters to me, much more, apparently, than what I am actually hearing.

[originally] Posted by Michael Zapruder at February 19, 2008 11:59 AM at blog.pandora.com

Begin


off we go

I’ve wanted to start a blog for a while. I’m not sure how far I’ll go with it or what will become of it. I’ve gone through iteration after iteration in several content management systems and blogging platforms (ExpressionEngine, WordPress, Jekyll) and even tried to write my own with PHP/MySQL.

I’ve had some successes along the way and learned a good bit, but I found myself stuck either on design or frustrated with something technical. Whether it was my date-sorting mechanism failing in my homegrown PHP-based CMS or my dissatisfaction with the visual design of what I came up with1, I could never seem to get something off the ground.

It has literally been years in the making, and finally, after doing a bit of reading, I realized I needed to just do this thing. My end goal was blogging, not really having a home-grown solution. I wanted to learn in the process, and I certainly have; it’s just that I don’t want to build a content management system. I managed to dive in with a bit of PHP, Node, and server configuration along the way. I’d say it’s a successful lesson learned. So with that in mind, I started on Jekyll then quickly moved to Octopress at the recommendation of Andrew Theroux (@theroux).

I’ve used Ruby Gems in the past but only to the extent of using $ gem install foo. Not exactly groundbreaking stuff. Octopress was similar but it had a few extra steps that I needed to dive into. I had to upgrade Ruby since my MacBook had 1.8.3 and Octopress required 1.9.2. Luckily Octopress is very clearly documented and before long, I was headed in the direction of RVM. This made upgrading a breeze and made me thankful that there are a lot of people in the web community who know a lot more than I do.

I’m hosting this badboy on Heroku and using GitHub as a backup. In the future, I hope to write about what I’m learning in my new-ish job at nclud. I also want to customize the look and feel of this thing since right now it’s just the default skin, and hopefully dive into the source a bit to adapt it to my needs.

We’ll see where this goes!

1. I like designing in code but I have high standards and my ability for visual design is lacking. I’m actively working on it and always trying to push myself to not settle.