Development

Test Content Generator

in Development

Testing is one of my favorite parts of building a website. The in-depth, detailed work involved in quality testing is something that I’m really drawn to and enjoy building tools for. When testing, content is an integral part of making sure that there are no visual bugs or tics on the website – however, spinning up a lot of content takes time. A lot of time.

With this in mind, I’ve built a tool that spins up test content – and removes it again – at the click of a button. It’s simply called Test Content Suite and it’s up for free on my Github account.

The concept is simple – you choose how many posts to spin up (or keep it random), click the “Create” button, and the plugin will create those posts, assign them to appropriate taxonomies, pull in featured images, and even create metadata if available.
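
Under the hood, the “create” step maps onto a handful of core functions. This isn’t the plugin’s actual code – just a minimal sketch of the approach, with the meta key and content made up for illustration:

<?php
// Hypothetical sketch of the "create" step -- not Test Content Suite's real code.
function tcs_create_test_posts( $count = 10 ) {
	for ( $i = 1; $i <= $count; $i++ ) {
		$post_id = wp_insert_post( array(
			'post_title'   => "Test post {$i}",
			'post_content' => 'Lorem ipsum dolor sit amet...',
			'post_status'  => 'publish',
			'post_type'    => 'post',
		) );

		if ( ! $post_id || is_wp_error( $post_id ) ) {
			continue;
		}

		// Assign a term and a piece of metadata so templates have real data to render.
		wp_set_object_terms( $post_id, 'Test Category', 'category' );
		update_post_meta( $post_id, '_tcs_test_content', 1 ); // flag used for cleanup later
	}
}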


You can then delete the posts you created (and only those test posts – it won’t go wild on you) by hitting the “Delete” button. Simplicity is the key here – I wanted to make the process easier and quicker and this does the trick.
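
The delete step is only safe because of that flag – again, a hedged sketch rather than the shipped code:

<?php
// Hypothetical sketch of the "delete" step -- only posts flagged at creation time are removed.
function tcs_delete_test_posts() {
	$test_post_ids = get_posts( array(
		'post_type'      => 'any',
		'posts_per_page' => -1,
		'meta_key'       => '_tcs_test_content', // the flag set when the post was generated
		'fields'         => 'ids',
	) );

	foreach ( $test_post_ids as $post_id ) {
		wp_delete_post( $post_id, true ); // true = bypass trash, delete permanently
	}
}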

The tool is currently intended as a development tool and is thus only available on Github for the time being. Once it’s built out to a point where it’s more applicable to a general audience, I might push it onto the .org repo.

I hope you enjoy this small, free tool!

https://github.com/mikeselander/dummybot

A More Perfect WordPress

in Development

Ryan McCue pushed up a great monologue/question today on what would make WordPress more enjoyable to work with from a developer perspective. I’ve had several thoughts on this topic and figured the best way to share them is with a short blog post expanding on each of them.

Consistency in function naming

Function naming has evolved over the last 10 or so years, but there’s been no decided consistency, which makes it particularly difficult to learn which function to use when. I can use `get_the_title()` and `get_permalink()` out of the loop, but not `get_the_content()`? And on those functions, we have `get_the_title`, `get_the_content`, but not `get_the_permalink`? We expect people to memorize all these differences instead of fixing the core issue.
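
To make the inconsistency concrete, here’s a quick sketch (the post ID is arbitrary, and the behavior described is core’s at the time of writing):

<?php
// Outside the loop, with a known post ID:
echo get_the_title( 42 );   // works -- accepts a post ID
echo get_permalink( 42 );   // works -- accepts a post ID
echo get_the_content();     // relies on the global $post, so outside the loop
                            // it hands back the wrong content or nothing at all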

Clean up the spaghetti code

There needs to be a concerted effort to refactor a significant portion of the code in core. It’s a disorganized mess, and finding anything in core can take a lot of time. It’s common for me to track down 4-5 files to find the source of a single piece of functionality. This is a HUGE task and would take a year or more in an ideal world – the easiest way to approach it would be to rank the worst-offending files and work down the list. Eventually the code would be dramatically more logical.

Full separation of functional & display code

This goes along with the point above but deserves its own section. WordPress is FULL of admin files that have functions mixed in with display code, which makes it damn near impossible to break the code down and make intelligent improvements.

REST API in core

The REST API landing in core and getting fleshed out will be a huge boon to making WP more developer friendly. It breaks you out of the shackles of using the WordPress templating system, which can be very frustrating to learn the first few times through. When a JS dev can jump in and build a WP frontend quickly, WP won’t be dreaded anymore.
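
As a rough illustration of what that unlocks: any client that can make an HTTP request will be able to pull posts as JSON. A hedged sketch using the core v2 route (the domain is a placeholder):

<?php
// Hypothetical example: fetching the five latest posts as JSON via the REST API.
$response = wp_remote_get( 'https://example.com/wp-json/wp/v2/posts?per_page=5' );

if ( ! is_wp_error( $response ) ) {
	$posts = json_decode( wp_remote_retrieve_body( $response ) );

	foreach ( $posts as $post ) {
		echo $post->title->rendered . "\n"; // no loop, no template hierarchy required
	}
}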

Performance: A Case Study

in Development

I’m going to start writing sister posts to talks that I give to make it easier to reference the information. This post is attached to a talk I gave last year at the Fort Collins WordPress Meetup. You can find the presentation slides here.

Why bother with performance?

Performance is a crucial piece of development and content marketing. Having a site that runs quickly and doesn’t frustrate your visitors will make you money and increase conversions significantly.

On average, each 1-second delay in loading time means:

  • 11% fewer page views
  • 16% decrease in customer satisfaction
  • 7% loss in conversions

If you have a site that loads in 20 seconds, you’re losing a huge amount of your traffic and potentially advertising dollars simply because your site is inconveniently slow. This means a loss of ROI on your site. On the flip side, your ROI increases dramatically from speeding your site up.

  • Amazon increased revenues by 1% for every 100ms of improved loading time
  • Yahoo increased traffic by 9% for every 400ms of improvement
  • Mozilla dropped page load time by 2.2 seconds and saw an estimated 60 MILLION more downloads per year as a result

General speed concepts

There are a few concepts that are crucial to understanding front-end performance.

Smaller sized objects

You have to load resources on a web page to display anything. Images, videos, CSS, and JS all have to be loaded onto the page before the user gets the full experience. If we can decrease the size of each resource we load, the user will have a much better experience.

Fewer HTTP Requests

Along the same lines, if we can reduce the number of requests, the page will render faster. At the time of writing, HTTP/1.1 is still the de-facto technology. HTTP/1.1 can only download so many objects at once – let’s say 5 at a time. So, if you have 60 resources to download on a page, the browser will download 5, then the next 5, then the next 5, and on and on until it’s done. Each chunk takes a while and results in a slower site because it has to wait for the previous ones.

Optimized Rendering Path

Along the same lines as the other 2 points, there is a preferred rendering path in the browser. Javascript is a render-blocking resource. If you load your Javascript at the very top of the page, the browser will stop downloading the other resources, download the js, parse the js, and run the js before attempting any other resources. This slows down the entire page’s rendering time significantly.

The Site


The site we’re talking about today is Cheba Hut. This client came to us at Old Town Media wanting changes to their site but we quickly discovered that it needed a complete rebuild.

Before:

  • Static HTML site leveraging Cloudflare
  • Average of 23 seconds to load!
  • ~15 MB page weight
  • 170 resources loaded per page

This resulted in:

  • 2,549 hours a year of waiting
  • 4.837 TB a year in bandwidth
  • $368,640 a year in end-user bandwidth delivery costs

We rebuilt the site in WordPress and ended up with a significantly faster product.

  • 0.75-second average page load time
  • 2.4 MB page weight
  • 61 resources per page load

Diagnosing the Problem

The Overview – Pingdom Tools

Pingdom Tools gives you a great 10,000-foot overview of the issues on a site. You can quickly work down the waterfall chart, see what’s taking the longest to load, and get some basic stats on the page you’re testing.

Practical Tips – Google Pagespeed Insights

Diving in a little deeper, we have Google’s Pagespeed Insights tool. Instead of giving you data about load time, Pagespeed gives solid recommendations on how to fix the problems it identifies, and it even optimizes your resources and packages them up for you.

In Depth Analytics – Webpage Speed Test

Webpage Speed Test is the powerhouse of all these tools. It takes quite a while to process, but gives more data and analysis than any of the others. You can run multiple tests at once from multiple locations and even run some powerful scripts, such as a hosts-file redirect.

The Build

The first step when optimizing a site is to get rid of every resource that you possibly can. Unneeded images, JS, and CSS are the bane of an optimizer. By removing these unnecessary assets you free up the waterfall for the important ones.
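
In WordPress terms, the quickest wins usually come from dequeueing scripts and styles that a theme or plugin registers but the page never uses. A minimal sketch – the handle names here are made up:

<?php
// Hypothetical sketch: drop assets that this particular page doesn't need.
add_action( 'wp_enqueue_scripts', function () {
	if ( ! is_page( 'contact' ) ) {
		wp_dequeue_script( 'contact-form-js' );  // made-up handles -- use the ones
		wp_dequeue_style( 'contact-form-css' );  // actually registered on your site
	}
}, 100 ); // late priority so it runs after the assets are enqueued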

This case was no different: we were able to remove almost 2,600 lines of HTML, 8 JavaScript files, and 5 CSS files.

Refactoring

The assets that we did have to keep, we rewrote and refactored. The snippet shown below is a rewritten version of almost 800 lines of code that controlled a single feature. This brought the load and rendering time of the JS file down significantly and kept the JS from blocking the rest of the assets.

(Screenshot: the refactored JS snippet)

Async/Defer

Even better, you can defer or async the JavaScript that you do load. Async lets the script download in parallel with the other resources and run as soon as it’s ready – it essentially behaves like images and CSS in the waterfall. Defer also downloads in parallel, but holds off executing the script until the rest of the document has been parsed. This is a powerful tool when dealing with JS because you can remove the render-blocking delay entirely.
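
WordPress doesn’t (as of this writing) have an enqueue flag for this, but the printed script tag can be filtered. A minimal sketch, assuming a made-up ‘my-analytics’ handle:

<?php
// Hypothetical sketch: add a defer attribute to one specific enqueued script.
add_filter( 'script_loader_tag', function ( $tag, $handle ) {
	if ( 'my-analytics' === $handle ) { // made-up handle for illustration
		$tag = str_replace( ' src=', ' defer src=', $tag );
	}
	return $tag;
}, 10, 2 );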

Mobile-first CSS

There are several approaches to responsive CSS, but the newest and most effective from a performance perspective is mobile-first. Mobile-first involves taking the basic styles that apply to both mobile and desktop designs and placing them in your base stylesheet, then adding features via min-width media queries. This is progressive enhancement.

The theory behind this is that tablets and desktops are usually on stable Wi-Fi connections, whereas your mobile visitors are most likely on 3G or 4G networks that take longer to load. By placing smaller assets in your base stylesheet and only loading the larger versions on a desktop view, you save your users’ bandwidth and loading time.

SVGs vs. Images

SVGs are code that represents paths and basic shapes. They can be used to replace large and unwieldy images on a site and prevent the loading of a huge image. In this case, I replaced a large and highly styled map of the US and franchise locations with an SVG version of the US – this saved me from having to use a 300mB PNG representation of the states.
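
If the SVG lives in the theme, it can even be inlined so it costs no request at all. A sketch – the file path is hypothetical:

<?php
// Hypothetical sketch: inline an SVG from the theme instead of requesting a heavy PNG.
$svg_path = get_template_directory() . '/images/us-map.svg'; // made-up path

if ( file_exists( $svg_path ) ) {
	echo file_get_contents( $svg_path ); // prints the <svg> markup directly into the page
}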

Images

Images are the largest type of resource that you’ll likely be loading on your website. Therefore, paying attention to the size and type of images used is very important.

Compression

The first and most important thing to do when dealing with images is to compress them for the web. Images coming straight out of Photoshop or, God forbid, Illustrator are very heavy and generally meant to retain their quality. Photoshop was built for photographers and is therefore unlikely to shed any quality unless you force it. Running all of your images through a compressor will strip unnecessary EXIF data and leftover layers and leave a much smaller file.

Popular Compressors:

Event Trigger Images

If you can avoid loading images at page load entirely, that’s even better. There are multiple ways to trigger an image to load, but the most popular is lazy loading. A JS plugin watches your scroll position as you move down the page and only loads an image once it comes within X pixels of the viewport. This keeps it from loading at render time and maintains a smooth experience for the user.
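
The server-side half of that pattern is simply swapping the real src for a data attribute so the browser doesn’t fetch the image up front; a small scroll-watching script (not shown here) swaps it back when needed. A rough sketch, not any particular plugin’s code:

<?php
// Hypothetical sketch: rewrite content images to use data-src placeholders.
// A companion JS snippet (not shown) restores data-src to src as images scroll into view.
add_filter( 'the_content', function ( $content ) {
	return preg_replace(
		'/<img([^>]*?)\ssrc=/i',
		'<img$1 data-src=',
		$content
	);
}, 20 );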

We took a slightly different route and handled the sandwich selector on the site carefully. Previously, all ~40 images were loaded at page load time – one for each sandwich that Cheba Hut makes. Instead, we left the sandwich images out of the initial page load and only loaded each one when the user clicked on that sandwich, thereby expressing intent to view it.

CSS Sprites

Making sprites is the act of combining several small images into a single file and displaying the right piece of it using CSS – thereby reducing the total number of resources loaded on the page. We used this to cut the number of image requests by 6.

Choosing the Right File Type

Finally, choosing between a PNG and a JPG is vital to loading images wisely. PNGs and JPGs each have their own specific purpose, and choosing the right one can significantly reduce loading time:

PNGs should only be used when the image is extremely simple or has transparency in it. They should never be used for photographs or when there’s a large amount of complexity.

JPGs are best when there’s no transparency – background images, photographs, etc.

By using a JPG instead of a PNG for the map background on Cheba Hut, we dropped 50% of the file size on that single file – over 200kB!

Hosting

Use a WordPress Host

WordPress hosts have the advantage of being able to optimize extremely well for a single application – WordPress and only WordPress. They’re more expensive, but it pays off because they can finely tune their server setup and caching to provide a huge speed boost.

Else, Use a Caching Plugin

If you can’t convince your client to go with an expensive WP host, use a caching plugin. A caching plugin will generate static copies of your pages, compress and gzip the assets, and deliver them to each subsequent visitor without rebuilding the page.

Also, CDNs Are Awesome

Finally, CDNs are a life saver when it comes to performance. A CDN has servers all across the globe and the ability to distribute your cached site from whichever one is the closest to the user, shaving off microseconds or even whole seconds from the delivery of files to the browser.

New Design – Simplicity

in Development

I redesigned my blog this weekend with a focus on the text and photos – nothing else. I quickly found the last design to be incredibly restrictive when it came to any kind of custom layout or reading more than a paragraph. Poor typography, a narrow content area, and a lack of contrast added up to a pretty unusable design. So, I scrapped it and rebuilt the whole thing to focus on the dead-simple basics. It weighs in at just over 73 kB (most of which is Google fonts) without a hero image, and the CSS is a measly 1.4 kB. Makes for a lightning-fast experience!

I hope you enjoy the easier reading.

Just Stop.

in Development

There comes a time in some projects where you have to just stop.

In the last several months we had to do this with one of our projects. It was built about 8 months ago on a tight deadline with loose parameters from the client, and it had a LOT of complex content and integrations built into it. We got it done on time but without enough testing, and time quickly aged the site. It got to the point where 3-4 issues were coming in to this client’s PM every week and we were constantly in and out of the site fixing little things.

So I cut it off and said “we’re stopping”.

All development was halted, all incoming issues were tossed into a single ticket instead of being immediately addressed, and I went through every page of the site in every browser we test on and on every device in our device lab, fixing every single bug. It was expensive from a time perspective, it was pretty painful to see all that we missed, and it took away from our other active projects.

We’ve had 1 bug come through in the month since we did the audit & repair on the site. Where we were spending a couple of hours a week going in and out, we’re now bidding on more work for this client and they couldn’t be happier.

Sometimes, you just have to step back, put on the blinders and ruthlessly fix everything that’s wrong on a project. It will be painful short-term but you’ll end up with a smoothly working site and happy clients.

Want to Make More Money? Refactor All the Things

in Development

I recently went refactoring on one of the core pieces of our workflow at OTM. We’ve had a mu-plugin that handles everything from cleaning up the admin section to creating CPTs to setting up the initial settings on a site and installing plugins. It is (was) a handy piece of software that automated a lot of tedious tasks.

But it wasn’t good enough.

My coworkers didn’t like having to manually update references to delete files or slog through 200 lines of code to replace a CPT slug and create a new one on the fly. I didn’t like having to manually delete the install files that got forgotten when we took a site live. No one liked creating test data manually in WP. So, a week ago I took the weekend to start from the ground up and see what I could come up with.

~15 hours of work later, I estimate that the revised version will save us 1 hour on every build we complete. Minimum. That means at least 80 hours of saved work a year, plus a codebase that’s cleaner, significantly easier to maintain, and easier to explain to a new developer during onboarding. The relatively small amount of time it took to rebuild one of our tools from the ground up will return almost 5X over the next year alone.

So, if you want to make more money and be more efficient, take a piece of your workflow and refactor it. Dissect it, find the weaknesses, burn it down and rebuild it. Ruthlessly eliminate the chaff and clean, streamline, reduce, DRY.

P.S.: I also open-sourced the tool on Github so feel free to fork it, take it, PR it. https://github.com/oldtownmedia/evans

New Site Design

in Development

You might have noticed that I didn’t post a blog last weekend – that’s because I was working hard on a new design for this site. The old design was ~2 years old and was getting quite stagnant. I decided on a whole new look – instead of a fixed header I went with a fixed side header and the content all along the right. This makes the site a lot easier to read and use, and it behaves better responsively. It’s built mobile-first and with the least CSS possible to make it super light. It’s hosted on a reseller account on HostGator and just uses a simple caching plugin – no CDN or anything – and it’s still very, very fast.

I hope you enjoy the new site!

Moving from one SVN repo to 415 Git repos

in Development

At Old Town Media, we had used a single Subversion repo with a folder for each site we worked on for the last 9 or so years. That changed two months ago when we finally switched to Git hosted on BitBucket, with a single repository for each and every one of our sites. This has been a huge boon for our workflow and opens up a lot of opportunity for a git-based local environment workflow and automated testing using a service like Jenkins or dploy.io.

Why We Switched

Our old system worked. It was also easy for new people to step into using the very simple Versions app. It also sucked. Versions is an extremely buggy app that throws a fit almost every time you so much as move a file around. Using a single repository for all of our sites meant we couldn’t track our changes effectively or tag them using our project management software. It left our entire system vulnerable to one accidental deletion, and on top of that, the SVN repo was hosted in our office, so there was too little physical separation between our checked-out copies and the repository itself – one disaster could have taken out both.

On top of this, converting to git and having a single repository for each site opened up a world of possibilities for our workflow. We could more easily tag changes, we could (finally) run branches for different development stages (something Versions is incapable of), and we could push our repos through an automated testing service and even deploy using those same services. It would also make local development easier, because most deployment services offer git integrations but nothing for SVN, and we could push much more easily from our local copies.

In other words, it was a no-brainer to switch to git and actually host our repos correctly.

Who?

The hard part came when I started looking into which service to use and how to switch from SVN to git, hopefully with our full commit histories. There are a LOT of git hosting services, but we needed private repos because almost everything that we would host on this service would be client sites – and it’s not kosher to broadcast that code out publicly. We played with a self-hosted service, but that ended up being a disaster to get permissions set up properly and we really had little desire to play sysadmin to our repos. In the end it came down to the 2 biggest services out there – Github and Bitbucket. Github has a fantastic UI and a decent Mac app and was definitely my 1st choice until I saw the pricing for private repos. Github ain’t cheap, kids. So, to Bitbucket we went. The UI is very clunky, but their support and API are fantastic and the pricing is extremely reasonable – it’s based on the number of users instead of the number of repos – basically built for a small agency that pumps out a lot of sites.

How?

Once we finally settled on a git hosting service, we had to figure out how to break the 415 folders in a single SVN repository out into 415 individual git repositories, without losing our commit history. There are a lot of libraries that accomplish a straight one-to-one conversion, but not a single one that converts a mega-repo into proper individual ones. In the end, we settled on keeping the SVN repo alive as a fallback in case something went wrong with the move, and converting each folder into its own fresh repo without carrying its history over into the new system.

We had 415 folders – almost all of them named as URLs – with a total size of around 40GB excluding the SVN files, and ~286,000 individual files: everything from PSDs to images to readmes for libraries.

Step 1: Download the entire SVN repo

This took a crazy long time – around 3 hours of just chugging through and downloading every possible file into a folder on my desktop. The good thing about the way we had organized the repo in the past is that all of the main folders sat at the same level of the hierarchy – meaning we could simply loop through all of the folder names and run our tasks off that list in the next steps.

Step 2: Make an automator action to spit out an array of all of the folder names

I needed an array that I could use in PHP (it could also have been bash, JS, etc.) to loop through when hitting BitBucket’s fantastic API. Miles built me an Automator action to loop through the folder names and spit out a text file with all of the URLs – and non-URLs, which we compared against and used to clean up the folder names before the import. You can see the action in the screenshot below.

(Screenshot: the Automator action)

When I actually ran my repo creation import in the next step, I clicked on “Results” in the “Get Folder Contents” section which allows you to interrupt the flow and pull the results out as an array.

Step 3: Create Bitbucket repos via the API

Next, we need to create all of our repos using the array that we just got from Automator.
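
The original snippet isn’t reproduced here, but the gist of it looks something like this – a reconstruction rather than the exact code, using BitBucket’s v2 repositories endpoint with placeholder credentials:

<?php
// Rough reconstruction of the repo-creation loop -- credentials are placeholders.
$sites = array( 'example-site.com', 'another-site.com' ); // the array from the Automator step

foreach ( $sites as $slug ) {
	$ch = curl_init( "https://api.bitbucket.org/2.0/repositories/oldtownmedia/{$slug}" );

	curl_setopt_array( $ch, array(
		CURLOPT_CUSTOMREQUEST  => 'POST',
		CURLOPT_USERPWD        => 'username:password', // placeholder credentials
		CURLOPT_HTTPHEADER     => array( 'Content-Type: application/json' ),
		CURLOPT_POSTFIELDS     => json_encode( array( 'scm' => 'git', 'is_private' => true ) ),
		CURLOPT_RETURNTRANSFER => true,
	) );

	// Print the API response for each repo so any failures are easy to spot.
	echo curl_exec( $ch ) . "\n";
	curl_close( $ch );
}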

As you can see above, for each item in the array we call the API, create our repository, and print out the returned result for verification. Now, I made a mistake at this stage: I assumed that BitBucket wouldn’t parse the URL in the ‘website’ field on entry and would simply accept anything I put in there. Well, they were smarter than me and they did parse it, so I had to go back through and re-create about 10-20 repos without the website field.

Step 4: Clone all of the new BitBucket repos to my computer

This is where the really nerdy cool work comes in. In bash, I looped through a slightly modified array of names (taken from the Automator step) and cloned the git repo for every single one of them into a sibling folder adjacent to my SVN checkout.

# $sites is the array of repo/folder names generated from the Automator output
for i in "${sites[@]}"
do
	git clone git@bitbucket.org:oldtownmedia/$i.git
done

Step 5: Diff the SVN & Git folders

At this stage I had two sets of folders on my computer: one with 415 full folders & SVN files in it, and one with 415 empty folders and git files in it. This is when I deleted all of the old SVN metadata with “rm -rf .svn”. That shaved about 20GB off the total folder size.

To merge the files from one folder into another without wiping out what’s already there, Mac provides a simple command called “ditto”. You feed it the path of the folder that you want to copy from and the path of the folder that you want to copy to, and it handles all of the diffing and copying of the files – and it’s damn quick. E.g.:

ditto ~/Documents/MyFolder ~/Desktop/MyNewFolder

Now you have two identical sets of folders, with the exception that one has git files for each repo and is ready to stage and push.

Step 6: Push it. Push it real good.

Just like I looped through the site array to pull down all of my repos, I looped through the individual folders and committed/pushed everything at once. This took about 5 hours, so I left my computer running for the night and looked through everything in the morning.

# run from the parent folder containing the freshly cloned git repos
for i in "${sites[@]}"
do
	cd $i
	git add .
	git commit -m "Initial git commit"
	git push origin master
	cd ../
done

And just like that, we had 415 individual, properly named git repos to use. Now, let’s add up the total time it took to run everything:

  • 255 seconds to create 415 repos via API
  • 12 min to clone them to my computer
  • 5 hours to push all files to the repo master branch

Not bad for a move to a completely new versioning system and breaking up a mega-repo into individual usable ones!

Apply Here If…

in Development

Interviewing and hiring developers has been an eye-opening experience in what to do when applying for a job. The biggest thing that got my attention was how important the little things are when applying – specifically, how easy it is to tell whether someone cares at all from things like spelling errors, attention to application instructions, and their online profiles. Here are a few tips for anyone applying at Old Town Media.

Spellcheck. Holy cow, just spellcheck

Spelling and grammar are the most basic of job skills. If you can’t be bothered to check for misspelled words when you’re applying for the job, how can I trust you to write intelligent emails to clients, communicate with our team, or write helpful code comments? It shows a complete lack of attention to detail and, frankly, a lack of “giving a crap”.

The initial email and CV/resume should be the most polished thing you’ve ever written because I WILL judge you on it. We often have too many applicants to waste time on someone that doesn’t care.

Don’t give me what I gave you

On one of our recent hirings, about half of the applicants tried as hard as they could to get the exact same adjectives/wording in their cover letters as we put in the job description. This is not only lazy and unimaginative, it’s downright pandering. I’m interested in employees who have their own individual thoughts and ideas – I don’t want someone who will just spit my words right back at me.

Now, I know this is what a lot of university career centers tell their graduates to do. But you should ignore a lot of what they told you.

Don’t give me templates

I can tell if your cover letter is a template that you replaced a few words in. If you’re thinking about sending us a template, instead write a thoughtful paragraph. I would prefer a single short paragraph about yourself to a wall of text that was clearly brilliantly written for another prospective job two years ago.

Have an online presence

The first thing I do after reading a resume is Google the applicant. I get concerned if I can’t find anything about someone on Google. That means either a) you’ve done a magnificent job of hiding something about yourself or b) you’ve made no mark on the Internet at all. You can’t have real experience as a developer and not have made any mark on the Internet.

Make it easy on me and include links to several relevant profiles that I can check out. It’s better than me Googling and finding the drunk college version of you on a slip and slide.

Have patience

I’ve been there. Waiting is downright painful. Did they receive my resume? Did they not like it? Is the job still available? Probably yes, no, yes. As a small company it takes a lot of time to parse through resumes, and if we’re hiring someone it means that we’ve run out of time to do the work ourselves – which means we’re reviewing resumes during our 50th hour of work that week. It’s painful, but be patient and it will be rewarded.

My First Public Plugin: Simple Frontend Template Display

in Development

I published my first public plugin to the repository yesterday – it’s a simple little plugin that displays the current page template name and filename on the toolbar.

You can find it here: http://wordpress.org/plugins/simple-frontend-template-display/

Why build this? Frankly, it got really tedious to switch back and forth between the edit screen and the frontend just to find a simple piece of info. So, I built it and it ended up saving me a bunch of time. I figured I’d publish it and help other people speed up their development time as well.
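
The whole thing boils down to catching which template file WordPress picks and printing it in the toolbar. A simplified sketch of the approach – not the plugin verbatim:

<?php
// Simplified sketch: show the current template file name in the admin toolbar.
$sftd_template = '';

// Capture the template file WordPress chooses for this request.
add_filter( 'template_include', function ( $template ) use ( &$sftd_template ) {
	$sftd_template = basename( $template );
	return $template;
}, 999 );

// Add it to the toolbar on the front end.
add_action( 'admin_bar_menu', function ( $wp_admin_bar ) use ( &$sftd_template ) {
	if ( is_admin() || ! $sftd_template ) {
		return;
	}
	$wp_admin_bar->add_node( array(
		'id'    => 'sftd-template',
		'title' => 'Template: ' . $sftd_template,
	) );
}, 100 );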

I’ll be putting up the plugin on Github so anyone can contribute and pull it down separately. Hopefully you all will have some great ideas for it!