As the sole developer on the Marketing team (and the lead for contractors during busy periods), the projects that land on my desk (some self-assigned) range from something as simple as adding or updating an image or text on an existing page, to something as complex as architecting and building a solution for whatever the team needs at the time.
Some examples of the latter include updating the blogging system the team uses, and creating methods to control and monitor internal links throughout the site so they are easy to update if content or sections are relocated in the information architecture.
Below, I have gone into detail about some of the tasks that I was given, how I tackled them and what the end result turned out to be.
When I started, there were already talks happening within the team about what could be done with the old, dated system serving the blog portion of the website, which at the time was Expression Engine.
The Marketing team wanted to replace and upgrade the blog software: it was due for a redesign, slow both to use and to render pages to readers, subject to spam comments that were difficult to manage, painful to work with in the UI, and simply not flexible enough to modify or add features to.
The Operations team also wanted to do something about the system, as it was database driven, which left it open to potential XSS attacks.
The new system had to be up to date (and easy to keep that way), easy to use and familiar to all existing and new Marketing staff who would potentially use it, and secure enough to pose no threat to our systems. Armed with those requirements, I had a tough task ahead of me.
A WordPress install was an obvious choice: as the most widely used CMS, almost all staff would have used it at some point, and for those who hadn't, the WordPress UI is straightforward enough that it wouldn't take anyone long to get familiar with it. However, it was still database driven and public facing, so we were potentially trading one security hole for another, something Operations understandably weren't keen on.
With this in mind, I had to think outside the box. Page load speed was also something I had to consider: serving pages directly from a database will always put a strain on that figure. Caching the resulting page is one way around that, but it doesn't fix the security side. So I took it a step further.
Ops built me a private server, accessible only on the company network, so it was immediately secure: the entire environment was hidden from the public. Still needing to get the resulting pages live, I wrote a plugin for our WordPress environment that creates a CSV list of URLs to crawl via PHP, then kicks off a Python script that reads the CSV, cURLs each URL and saves the response as an HTML file in the right folder structure for that page. Once this is complete, the Python script starts a build step that grabs the files from the site's Git repo, runs them through a series of checks and linting, pulls the newly created static files from the WordPress environment and builds the site onto all our servers under our load balancer.
As this method detaches the pages from WordPress entirely, traditional functions and plugins that provide public-facing features feeding back to the database no longer work, comments being one of them. We got around that by using third-party software to handle our comments; Disqus was the software of choice.
Once the system had been put through its paces and proven to work effectively, I was able to proceed with migrating the data out of Expression Engine and into WordPress. I did this by writing a Python script to query the required data and map it into a format that suited the WordPress database, including all the authors and user accounts. This was a big deal, as we also hooked the WP login into our internal user management system, so not only were all staff accounts added by default, nobody had to remember another login in order to use it.
Using email addresses as the unique identifier between EE and WP, I then had a user map to apply while running through the 1200+ blog posts we already had. While migrating each post's content into the WP database, I was also able to manipulate some of the data to change certain links and image paths, and to download the images from the EE location into the WP folder structure, using each post's created date to follow the WP uploads convention of /year/month/file.ext.
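The two mapping steps above can be sketched like this. It's an illustration only: the record fields (`id`, `email`) and the exact uploads prefix are assumptions, not the migration script itself.

```python
from datetime import datetime
from pathlib import PurePosixPath
from urllib.parse import urlparse

def build_user_map(ee_users, wp_users):
    # Join EE and WP user records on email address, the shared unique
    # key, giving {ee_author_id: wp_user_id} for rewriting authorship.
    wp_by_email = {u["email"].lower(): u["id"] for u in wp_users}
    return {u["id"]: wp_by_email[u["email"].lower()]
            for u in ee_users if u["email"].lower() in wp_by_email}

def wp_upload_path(created, image_url):
    # Place a downloaded EE image where WordPress expects it, using the
    # post's created date to follow the /year/month/file.ext convention.
    filename = PurePosixPath(urlparse(image_url).path).name
    return f"/wp-content/uploads/{created:%Y}/{created:%m}/{filename}"
```

Keying the join on lowercased email addresses sidesteps differences in how the two systems stored usernames and IDs.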
On completion, it was a matter of checking a number of posts to ensure the content had ported over correctly, confirming the content worked with the new styling, making tweaks to some posts and setting up 301 redirects from the old URLs to the new. It was ready to release!
The process not only meets the requirements of all parties involved, it also opens up the ability to manage, in the same way, not just the blog but any area of the site that suits a CMS-like environment (further info on this in ).
Spending my time duplicating page layouts and updating copy for a new page is not the best use of it, when I should be concentrating on bigger, more complex projects.
The content writers and designers spend time putting content into Google Docs or into their designs which, once on the web, may not work or fit as they intended. This creates a vicious loop of iterations in which not only my time is wasted, but multiple people's time in the process.
This is the type of thing a CMS is perfect for: making something technical simple by providing a GUI for the less technically minded. And as described above, I had already set up the perfect environment for it.
Expanding on the already proven and tested method, modifying the environment to handle multiple sections of the site wasn't overly difficult. Initially, the environment was set up to handle only the blog from its root, meaning it would generate the files from the category or post level and transfer them directly into the /blog/ folder.
All that was required was to move the environment up a level, so the blog became one section of the environment rather than the environment entirely. This automatically pushed the generation of files up a level too, so the only change needed in the process was where the files were dumped: now the root of the website files.
From here, it's just a matter of creating a new custom post type in the WP admin, which creates a site section. At this point, the sky is the limit on how the template and the admin are set up to input and output the content. Adding the section to the generation script is simply a matter of including the post type name; the script then crawls that branch of URLs and creates the files.
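The "branch of URLs" idea above can be sketched in a few lines. This is purely illustrative, with an assumed base domain and function name; the real generation script isn't shown in the text.

```python
def urls_for_section(post_type, slugs, base="https://example.com"):
    # Expand a registered custom post type and its published post slugs
    # into the branch of URLs the generation script should crawl: the
    # section landing page plus one URL per post.
    section = f"{base}/{post_type}/"
    return [section] + [f"{section}{slug}/" for slug in slugs]
```

Because each section hangs off its post type's slug, adding a whole new site section to the build reduces to adding one name to the script's list.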
The internal WordPress environment now holds around 10 sections of the overall site, managed in a templated fashion. Custom permissions allow me to give certain users throughout the company access to only certain sections, letting them concentrate on the sections they need without cluttering their workspace with areas they don't.
After optimising the build process to allow either generating only files changed since the last run, or a full regeneration of a section, processing a section can take as little as a handful of seconds. This provides the ability to move quickly and get changes and tweaks out fast when required.
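The incremental-versus-full choice can be sketched as a simple filter on modification timestamps. This is a hypothetical shape, not the actual build code: the timestamp map and the zero-means-full convention are assumptions for the example.

```python
def files_to_generate(urls, last_modified, last_build):
    # Pick only the pages changed since the previous generation run.
    # `last_modified` maps url -> timestamp; passing last_build=0
    # forces a full regeneration of the section.
    if not last_build:
        return list(urls)
    return [u for u in urls if last_modified.get(u, 0) > last_build]
```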
After the blog migration to WordPress, the idea of maintaining other parts of the website that fitted a templated model became very desirable to a number of departments in the company. It would also enable me to concentrate on more complex tasks instead of spending time on the mundane "content management" work that a system like WordPress does so well for the non-developer.
We proceeded to convert certain sections over to the CMS as projects came up. One of these sections was the resource guides, originally maintained as static files. At the time, it was a straight swap from static content into CMS format using a template, along with a cheeky redesign. That was sufficient for our needs then, but it was far from future-proof, and this became obvious very quickly.
As we grew our resource library, we not only increased the number of guides we had, but also started building out other types of resources, like webinars, infographics and glossaries, which we had no home for to make discoverable. On top of that, the blog, which is technically a resource, was completely separated from the resources section, and tools like our bulletproof button generator and our complete list of CSS usable in email were so difficult to find that I doubt anyone found them via the website.
We had to do something about this to provide a better experience for our readers.
The project "Resources V2" was born. I put the notion forward very early on that this was not one of those projects that should be rolled out quickly. Plenty of time, discussion, discovery and planning would have to go into it before design or development even started thinking about getting their hands dirty on this epic section.
The content team started the process by gathering examples of other resource "hubs" and "libraries" they liked, whether for the look, the UX, or certain elements, which we then used in discussions amongst the resources build team to work out what we wanted our section to be. We considered third-party software to manage our content, but as always we found we were fitting our needs into their rules and sacrificing things instead of having everything we wanted. A custom-built system was always going to be the way to go.
Once we had a laundry list of functions, design styles and UX ideas, the designer worked through numerous wireframes over a number of weeks, with team meetings to discuss and refine until we had something that felt right. At that point, I had something to structure the content around, in terms of categorisation and functionality, and could start work while the designer applied her finesse and detail to turn the wireframe into a design I could implement from.
All this prior planning made for a smooth development phase. I had a mammoth task ahead: ensuring the close to 2,000 resource articles were modified to match the new changes, that they were listed in the new search so every article could be found easily via keywords and filtering, and that the categorisation of the articles was sound enough to create a smooth navigation experience for readers who just wanted to browse the types of articles on offer.
After a lot of behind-the-scenes work on the admin to create a better way to create, store and categorise the articles, and with the templates built so new article types would work with both the search and category pages without any modification, we feel this project was a complete success. Visually it's a huge step up from what we had in the past, and after a month of the pages being live, stats show drop-off rates have reduced considerably, as users now have multiple easy ways to keep exploring the Campaign Monitor resources section and find what they are looking for.
The Campaign Monitor site is a huge one, at ~12,000 pages and growing every day. Interlinking strategies that let a user move around a site fluidly are always a major priority, and the Campaign Monitor site is no different.
But managing all these internal links becomes a potential problem if a page is ever relocated, its place in the structure changes, or its URL is renamed for SEO reasons. Finding every page that linked to it can be a difficult process.
A process for this was already in place: a CSV of URLs and "keys" was stored in a file, and a PHP function created a link when given a key. This allowed us to simply add to or modify the CSV file when a new page was added or an existing page was updated or removed. If a key doesn't exist when called, the link defaults to the homepage, so there is never a broken internal link.
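The original lookup is a PHP function that isn't shown here, but the logic is small enough to sketch. The CSV column order and function names below are assumptions for illustration.

```python
import csv
import io

def load_links(csv_text):
    # Parse the "key,url" CSV into a lookup table.
    return {row[0]: row[1]
            for row in csv.reader(io.StringIO(csv_text)) if len(row) >= 2}

def internal_link(key, links, home="/"):
    # Resolve a key to its URL; an unknown key falls back to the
    # homepage, so an internal link can never 404.
    return links.get(key, home)
```

The homepage fallback is the safety net: a stale key degrades gracefully instead of shipping a broken link.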
Great in theory, right? But I quickly found that with multiple people helping me work on the site, mainly designers who knew enough HTML and CSS to help me tweak pages, the CSV file was often forgotten about and rarely updated. A problem in itself.
Automate. I wrote a Python script that runs during the go-live build process, crawling all the PHP index files and reading metadata housed in a PHP comment at the top of each page. It then populates the CSV file with every key and URL it finds.
From this point, the existing functionality takes over. To further the usability of this feature, I added a search option to the internal WordPress system that searches the CSV by keyword; clicking a result displays the PHP code to enter on a page, with that page's key already populated.
We now have an always up-to-date link manager: if we ever move or rename a path, every link referencing that page updates, ensuring we never have broken internal links.
This has also created a way to ensure the feature is used with an easy-to-use search box.
SEO is a huge factor in marketing when it comes to organic traffic and organic prospects. Broken or redirected links alone can have a negative effect on a page's SEO, and ultimately a site's.
With Campaign Monitor's existence ticking over the 14-year mark this year, you can only imagine the amount of content created over the years. Managing and maintaining links across all those pages (~12,000 at the time of writing) is next to impossible.
I've never been a fan of the impossible, so I set out to build a solution that could easily identify any broken links or images within the site and produce a detailed report of every link or image that didn't return a 200 status, what that status was, and what page it was found on.
I decided to build the process as a Python script that initially crawls the home page for links and images and adds them to an array. That array is then looped through and each URL checked, with the results added to a second array keyed by the checked URL. During the initial loop, this results array is cross-checked to see whether a URL has already been crawled, preventing unnecessary load from hitting the same link multiple times.
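The check-once crawl above can be sketched like this. It's a simplified illustration, not the production script: `fetch` is an assumed injectable callable returning `(status, html)`, and a real crawler would also restrict recursion to the site's own domain.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    # Collect the href/src targets of <a> and <img> tags on a page.
    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.found.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.found.append(attrs["src"])

def check_links(start_url, fetch):
    # Breadth-first crawl from the home page. Each URL is checked
    # exactly once via the `checked` map, and anything non-200 is
    # reported along with the page it was found on.
    checked, broken, queue = {}, [], [(start_url, None)]
    while queue:
        url, found_on = queue.pop(0)
        if url in checked:
            continue  # already crawled: skip to avoid duplicate hits
        status, html = fetch(url)
        checked[url] = status
        if status != 200:
            broken.append({"url": url, "status": status, "found_on": found_on})
            continue
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.found:
            queue.append((urljoin(url, link), url))
    return broken
```

Recording `found_on` alongside the status is what turns a raw list of dead URLs into an actionable report for the content writers.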
Results are then stored in a JSON file and fed into a GUI for easy readability, so the content writers can work through old posts and pages containing broken links. We created a database table to support this: fixed links can be flagged, so multiple people can run through the list at the same time without re-checking links already fixed; notes can be added explaining why a link couldn't be fixed at the time; and links can be flagged to ignore in the event the crawl returned an incorrect status for some reason.
Python was chosen for the job because its processing time for something like this is a lot quicker than, say, PHP, while still giving me enough logic power to run, manipulate, store and save data as required.
Upon the initial build and first run, over 6,000 broken links and images were discovered throughout the site, the majority in blog posts a year or more old and within blog comments. After the first few passes of the list in the first month, we had fixed close to 60% of the links, and we continue to run through the list as time permits.
This project was always a long-term fix, given that working through thousands of broken links and images across pages is time-consuming work carried out around other projects.
This resulted in page loads close to 20 seconds in some cases (complete load, not just visually). I was tasked with reducing this.
When I started at Campaign Monitor, I was told early on that there had always been a desire to build really cool Annual Report / Year in Review pages to match other companies in the tech space, but Marketing had never been able to, as they'd never had a dedicated developer on the team... until now. So toward the end of my first year, sure enough, the project of our 2015 Annual Report landed on my desk.
Working together with one of the designers on the team, we set out to come up with something that would stand out. As we hadn't done one of these before and were unsure how successful it would be for us, we weren't given a whole lot of time, and that had to be kept in mind. The designer went off into his discovery phase, using other Annual Reports for reference and taking the pros and cons from them. After floating his design and functional ideas past me to ensure we could achieve his vision within the timeframe, he set out to complete his design.
The following year, the task rolled around again, Annual Report 2016, a little earlier in the year to give us more creative time to achieve something a little bigger and a little more "out there". Working with a different designer this time around, she had already done a bit of legwork and roughly knew the vision she was going for. Once given the actual project time, she wasted none of it coming up with a bunch of ideas we could move toward.
The team came up with a marketing idea covering a number of goals: company awareness, in the form of fun, sharable content, and education, in the form of asking the user (prospect or customer) a number of questions about their email marketing habits and usage, then giving them a score based on their answers along with a nurture series of emails.
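The scoring idea can be illustrated with a tiny sketch. The weighting scheme below is entirely hypothetical; the text only says a score is derived from the answers.

```python
def scorecard(answers, weights):
    # Hypothetical scoring: each answer choice carries a weight, and
    # the result is the share of the maximum possible score for the
    # questions answered, expressed as a percentage.
    earned = sum(weights[q][a] for q, a in answers.items())
    best = sum(max(w.values()) for q, w in weights.items() if q in answers)
    return round(100 * earned / best) if best else 0
```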
Not an easy feat, but the results speak for themselves. A nice full-screen splash page buys time for the Marketo form to load in; once it has, a CTA fades in, prompting the user down the page to the first radio-option question. That question determines a few things about the questions that follow, and once an option is selected, the remaining questions fade in with a nice transition.
I've got to admit, I was really happy with the outcome of this project and how much I was able to achieve using some crafty CSS to turn a standard Marketo form into something that doesn't look like a form at all.
See for yourself: Email marketing scorecard
One of the Campaign Monitor marketing strategies is to have pages dedicated and targeted to a specific industry that would benefit from email marketing in a particular way. These pages can quickly become very similar to a features page, where we point out the particular features of our product that benefit that industry the most.
From this came a desire to show this in a different way, while at the same time creating a page that is appealing to the eye and prompts sharing, which increases outreach and brand awareness simultaneously.
As one of the main points behind these pages was to stand out from the crowd and prompt sharing amongst peers, it was inevitable they would include a higher-than-normal amount of animation.
The results really do speak for themselves. It was quite enjoyable to work on, and the challenge of ensuring the animations worked correctly responsively was a fun one to overcome.