Tips To Help You Optimise Your Web Projects

Written by Epic Monkeez

We come across many beautifully built websites that have clearly been carefully crafted and precisely engineered. On closer inspection, however, we often discover that some very important pieces are missing. For example, web projects missing important meta information lose valuable traffic from search engines, and images and other important files that could easily have been protected are left open to download by unwanted third parties.

In 2013 there is no excuse for leaving these things out. Clients and website owners deserve maximum return on their investments, and websites that can be found and seen. It’s a shame this isn’t always the case, especially after all the hard work that the client, website owner, designer and developer have put in.

Here are 5 quick and easy improvements for you to implement, to ensure your web projects (and traffic) are optimised, polished and finished to a high standard.

1. robots.txt

Adding a robots.txt file to your website enables you, as the site owner, to give instructions about your site to web robots (the type used by search engines for indexing); this is called the Robots Exclusion Protocol. robots.txt also allows you to communicate your sitemap location and to limit the request rates on your site. Without robots.txt you are allowing bots full access to your web server by default. Some bots hit sites so often that they slow performance down and damage the user experience, which is not good.

    Some important points you should take into consideration when using robots.txt:

  • Some robots can ignore your robots.txt. In particular, malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers, will pay no attention to this file. You should block these at the web server level instead, for example with a .htaccess file.
  • The robots.txt file is a publicly available file. Anyone can see which sections of your server you do not want robots to use.
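As a sketch of that server-level blocking, the following .htaccess fragment (assuming Apache with mod_rewrite enabled; the bot names are purely examples) refuses requests whose User-Agent matches a known bad bot:

```apache
RewriteEngine On
# Match example bad-bot User-Agent strings, case-insensitively
RewriteCond %{HTTP_USER_AGENT} (EmailCollector|EvilScraper) [NC]
# Serve no content and return 403 Forbidden instead
RewriteRule .* - [F,L]
```

Unlike robots.txt, this is enforced by the server, so it works even against bots that ignore the exclusion protocol (as long as they send an identifiable User-Agent).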

Here is an example of a general robots.txt file disallowing robots access to the folder ‘archive’:


  User-agent: *
  Disallow: /archive/

Usage: place the robots.txt file in the root directory (the first or top-most directory) of your web server.
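To see these rules from a crawler's point of view, here is a small Python sketch using the standard library's urllib.robotparser to test URLs against the example rules above (the example.com URLs are placeholders):

```python
from urllib.robotparser import RobotFileParser

# parse() accepts an iterable of lines, so we can test the
# example rules directly without fetching anything
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /archive/",
])

# A well-behaved robot checks each URL before crawling it
print(rp.can_fetch("*", "https://example.com/archive/2012/"))  # False: disallowed
print(rp.can_fetch("*", "https://example.com/about/"))         # True: allowed
```

This is exactly the check a compliant crawler performs before requesting each page.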


2. sitemap.xml

Sitemaps are a way to tell search engines about pages, posts, or content on your website that they might not otherwise discover. In its simplest terms, a sitemap is an XML file, again placed in the root directory of your web server, containing a list of the pages on your website. Creating and submitting a sitemap helps ensure that search engines know about all the pages on your site, including URLs that may not be discoverable by search engine robots during normal crawling.

Example of a general sitemap.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
   <url><loc>http://www.example.com/</loc></url>
</urlset>

Usage: place the sitemap.xml file in the root directory (the first or top-most directory) of your web server.
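For anything beyond a handful of pages, sitemaps are usually generated rather than written by hand. As a minimal Python sketch (the page URLs are placeholders), the standard library's xml.etree.ElementTree can build one:

```python
import xml.etree.ElementTree as ET

# Official sitemap namespace from sitemaps.org
SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(pages):
    """Build a minimal sitemap.xml string from a list of page URLs."""
    urlset = ET.Element("urlset", {"xmlns": SITEMAP_NS})
    for page in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page
    return ET.tostring(urlset, encoding="unicode")

xml_doc = build_sitemap(["http://www.example.com/", "http://www.example.com/about/"])
print(xml_doc)
```

In practice you would feed the function your site's real URL list (from your CMS or database) and write the result to sitemap.xml in your web root.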


3. Microdata

Many sites are generated from structured data, often stored in databases. When this data is formatted into HTML, the structure of the original data becomes very difficult to recover. The people reading your content understand what they are reading, but search engines have only a limited understanding of what is being discussed, because they don’t understand the content itself.

Microdata is a set of tags, introduced with HTML5, that enables you to change this. Many applications, especially search engines, can benefit greatly from direct access to structured microdata.

By extending the HTML document with on-page markup, you allow search engines to understand the structure of the information on your web pages, providing richer search results and making it easier for users to find relevant information on the web. Markup can also enable new tools and applications to make use of this structure.

As you can see in the examples below, by adding additional tags to the HTML on your web pages (tags that say "this information describes this specific movie, or place, or person, or video") you can help search engines and other applications better understand your content and display it in a useful, more relevant way.

Example with the original HTML code:

  <h1>Twelve Monkeys</h1>
   <span>Director: Terry Gilliam (born November 22, 1940)</span>
   <span>Science fiction</span>
   <a href="">Trailer</a>

Example with on-page markup:

<div itemscope itemtype="http://schema.org/Movie">
   <h1 itemprop="name">Twelve Monkeys</h1>
   <div itemprop="director" itemscope itemtype="http://schema.org/Person">
   Director: <span itemprop="name">Terry Gilliam</span> (born <span itemprop="birthDate">November 22, 1940</span>)
   </div>
   <span itemprop="genre">Science fiction</span>
   <a href="" itemprop="trailer">Trailer</a>
</div>
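To illustrate how machines get at this markup, here is a simplified Python sketch using the standard library's html.parser. It is a toy collector, not a full microdata parser (real consumers handle nesting, itemscope boundaries and repeated properties), but it shows the principle of reading itemprop values out of annotated HTML:

```python
from html.parser import HTMLParser

class ItempropCollector(HTMLParser):
    """Toy collector: records the first text found after each itemprop attribute."""
    def __init__(self):
        super().__init__()
        self._pending = None   # itemprop name waiting for its text content
        self.props = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._pending = attrs["itemprop"]

    def handle_data(self, data):
        if self._pending and data.strip():
            self.props[self._pending] = data.strip()
            self._pending = None

collector = ItempropCollector()
collector.feed('<h1 itemprop="name">Twelve Monkeys</h1>'
               '<span itemprop="genre">Science fiction</span>')
print(collector.props)  # {'name': 'Twelve Monkeys', 'genre': 'Science fiction'}
```

A search engine does something conceptually similar, just far more thoroughly, which is why well-formed microdata translates directly into richer search results.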


4. Keyhole Markup Language (KML)

A KML file is a file that contains geographic data for Google Earth, Google Maps, and Google Maps for mobile. KML files are XML-formatted and store geographic modeling information: lines, points, images and polygons. They are used to label locations and to create overlay textures and camera angles for the maps created by these mapping applications. It is fairly easy to create a KML file and add your company or store location to Google Earth.

Example of a my-location.kml file:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Store location</name>
    <description>The location of our store where we sell fluffy monkeez all over the world.</description>
    <Point><coordinates>4.8952,52.3702</coordinates></Point>
  </Placemark>
</kml>

Usage: place the my-location.kml file in the root directory (the first or top-most directory) of your web server and add the following code to the head section of your HTML document.

  <link rel="alternate" type="application/vnd.google-earth.kml+xml" href="my-location.kml">
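Like sitemaps, KML files can be generated programmatically. Here is a minimal Python sketch using the standard library's xml.etree.ElementTree (the store name and coordinates are invented examples; note that KML lists longitude before latitude):

```python
import xml.etree.ElementTree as ET

# Official KML 2.2 namespace
KML_NS = "http://www.opengis.net/kml/2.2"

def build_kml(name, description, lon, lat):
    """Build a minimal KML string with a single Placemark at (lon, lat)."""
    kml = ET.Element("kml", {"xmlns": KML_NS})
    placemark = ET.SubElement(kml, "Placemark")
    ET.SubElement(placemark, "name").text = name
    ET.SubElement(placemark, "description").text = description
    point = ET.SubElement(placemark, "Point")
    # KML coordinates are "longitude,latitude", in that order
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

kml_doc = build_kml("Store location", "Where we sell fluffy monkeez.", 4.8952, 52.3702)
print(kml_doc)
```

Write the resulting string to my-location.kml in your web root, and it can be opened directly in Google Earth.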


5. humans.txt

humans.txt is an initiative for knowing the people behind a website. It's a .txt file that contains information about the different people who have contributed to building the site. While looking at a great website somewhere on the internet, have you ever asked yourself "Who made this website?" or "How did they build this web project?" This is where humans.txt comes in, presenting information about authors and developers to visitors in a simple and unobtrusive way. Even the NY Times does it (although theirs is poorly formatted), so why shouldn't you?

Basic example of a humans.txt file:

/* Humans responsible & Colophon */
   Site name: Epic Monkeez
   Site URL:

/* TEAM */
   Your title: Tanya Gray, Arjan Terol
   Site: hello[at]
   Twitter: @EpicMonkeez
   Location: The Netherlands

/* SITE */
   Last update: 2013/09/12
   Host: Webfaction
   Standards: HTML5, CSS3, Microformats
   Components: jQuery, Responsive, Isotope, Zurb Foundation
   Software: Sublime Text, Compass, Sass, Transmit, Codekit, Git

Usage: place the humans.txt file in the root directory (the first or top-most directory) of your web server and add the following code to the head section of your HTML document.

  <link href="humans.txt" rel="author">
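Since humans.txt is just plain text with a loose convention of /* SECTION */ headers and "Key: value" lines, generating it is trivial. A small Python sketch (the section names, fields and values are illustrative):

```python
def build_humans_txt(sections):
    """Render humans.txt from a mapping of section name -> {key: value}."""
    lines = []
    for section, fields in sections.items():
        lines.append(f"/* {section} */")
        for key, value in fields.items():
            lines.append(f"   {key}: {value}")
        lines.append("")  # blank line between sections
    return "\n".join(lines).rstrip() + "\n"

humans = build_humans_txt({
    "TEAM": {"Developer": "Jane Doe", "Location": "The Netherlands"},
    "SITE": {"Last update": "2013/09/12", "Standards": "HTML5, CSS3"},
})
print(humans)
```

This is handy if you want the file regenerated on each deploy, for example to keep the "Last update" field accurate automatically.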


So there you have it: our 5 tips to help you optimise, polish and finish your web projects to a high standard. Within an hour of writing some code and adding a few files, you can better control web robots on your website, take control of which content and pages you want search engines to display, pin-point your business precisely on maps, and credit your contributors and builders.

We hope you enjoyed our post and that it is useful for you in the future. Please feel free to add anything you think we have left out.