Since the dawn of search engines, there have been tactics designed to take advantage of and game their algorithms. For the past 20-plus years it has been a cat-and-mouse game, with the search engines constantly modifying their code in an attempt to maintain quality results.
I do want to briefly qualify myself as a blogger who can at least somewhat accurately represent this era and industry. Although I didn’t start considering myself an SEO until 2009, I’ve been very active on the web since 1995 in the Linux, web development, and open source communities. You can even check out my old GeoCities website from back in the day.
The attitude in the 90’s and early 2000’s was pretty much “anything goes.” Sure, Google had guidelines, but it was kind of like the way a mattress tag says “do not remove under penalty of law.”
In other words, no one really took them seriously. Remember, Google was not really dominating the search engine game until 2007-08. There were a number of search engines vying for position, including Yahoo, WebCrawler, Excite, Lycos, and others. Most of these search engines had fairly game-able algorithms, ripe for the picking. I found a list of about 1,600 search engines from that time during my research.
Hidden Text

Hidden text, sometimes described as “text the same color as the background,” was an easy way for webmasters to stuff as many keywords as possible into a page without hurting the user experience. If you were on the web between 1996 and 2002, chances are you’ve seen a lot of it. An easy way to spot hidden text is to look for large blocks of whitespace, usually in the footer.
Hidden text was accomplished in a number of different ways. In some cases the webmaster simply made white text on a white background. Other times it was done using CSS (advanced back then) or a few other techniques, such as light grey text on a slightly lighter grey background. This method is basically the SEO equivalent of saying “hey Googlebot, look at all my keywords, and yes, they really are part of the body of this page.”
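A minimal, hypothetical sketch of what this looked like in the markup of the era (the keywords are invented for illustration):

```html
<!-- White text on a white background: invisible to users, visible to crawlers -->
<body bgcolor="#FFFFFF">
  <p><font color="#FFFFFF">cheap widgets best widgets buy widgets
  discount widgets widgets online widget store</font></p>

  <!-- The later, "advanced" CSS version of the same trick -->
  <p style="color: #FFFFFF; background-color: #FFFFFF;">
    cheap widgets best widgets buy widgets
  </p>
</body>
```

Viewed in a browser the page looks empty, which is exactly why big blocks of footer whitespace were such a giveaway.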
This was an extremely common tactic, and it was very annoying and frustrating anytime you landed on one of these pages. Some webmasters would paste tens or even hundreds of thousands of words on a single page. You could scroll and scroll for minutes and not reach the bottom of the page.
At one point Google was actually granted a patent relating to hidden text and hidden links.
Guest book spamming
Raise your hand if you had a guestbook. I sure did. Guestbooks were pretty much one of the precursors of web 2.0, allowing users to “comment” on “weblogs.” Dropping your URL into a guestbook comment is much like blog comment spamming, only easier. In addition to placing links for backlink value, many of these comments were crafted to drive traffic to a website.
During my research I came across one guestbook spammer so prolific that I found two of his comments from 1998-99 within about ten minutes.
Guestbook spamming remained a major webspam tactic well into the 2000s; however, it started to lose momentum once web 2.0 frameworks such as WordPress, Joomla, and Pligg started popping up.
Meta Tag Stuffing
What blackhat SEO history lesson would be complete without mentioning meta tags? There was a time when search engines placed real emphasis on the meta keywords tag. The rule of thumb for blackhat SEO in the late 90s was “stuff as many keywords into as many different places as possible.”
This snippet of HTML source was taken from a website archived in 2001. As you can see, the webmaster stuffed keywords into the meta description and meta keywords tags (among other places within the body).
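The archived snippet itself isn’t reproduced here, but a typical stuffed head section from that era looked something like this (site and keywords invented for illustration):

```html
<head>
  <title>Cheap Widgets - Widgets - Buy Widgets - Widget Store - Widgets Online</title>
  <meta name="description" content="widgets, cheap widgets, buy widgets,
    widget store, discount widgets, widgets online, best widgets, widget deals">
  <meta name="keywords" content="widgets, cheap widgets, buy widgets, widget store,
    discount widgets, widgets online, best widgets, widgets widgets widgets">
</head>
```

Note that the description and keywords are basically the same comma-separated list; nobody was writing these for humans.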
While we are on the topic of meta tag stuffing, I may as well bring up a very similar and lesser-known blackhat SEO topic: “HTML comment keyword stuffing.” Like most languages, HTML lets you leave comments inline in a document. Webmasters would use this opportunity to stuff as many keywords as possible into these comments in hopes that search engine bots would pick them up. As with everything, some webmasters were more blatant about it than others:
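A hypothetical example of what that looked like in the source (keywords invented):

```html
<body>
<!-- widgets cheap widgets buy widgets widget store discount widgets
     widgets online best widgets widget deals widgets widgets widgets -->
<h1>Welcome to my homepage</h1>
<p>Thanks for stopping by!</p>
</body>
```

Comments never render, so the page looked perfectly normal to visitors while the source was packed with keywords.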
Stuffing keywords in HTML comments is not a very large part of blackhat SEO history, but it should be mentioned for the simple fact that I’ve personally seen it so many times within the past 20 years.
In addition to meta tag stuffing, tactics such as img alt attribute stuffing, header stuffing, and body content stuffing were very popular. I left those out because illustrating them would be redundant.
Webrings

Anyone who looks at the name “webring” can pretty much figure out what they were. Essentially, all member sites linked to each other in a circular structure. There were no exact rules for webrings to follow. Some webrings required you to embed a specific HTML snippet on your site, which accomplished the following goals:
- send a one-way link back to the webring owner
- link “forward and back” to other sites in the webring
- promote and brand the webring itself
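A typical webring badge accomplished all three at once. A hypothetical snippet (ring name, URLs, and site ID are invented for illustration):

```html
<!-- Webring navigation badge: every member pasted this into their page -->
<center>
  <p>This site is a proud member of the
    <!-- one-way link back to the ring owner's hub -->
    <a href="http://www.example-ring.com/">Goat Farmers WebRing</a></p>
  <!-- "prev" and "next" route through the hub, which picks your neighbor in the ring -->
  <a href="http://www.example-ring.com/nav?site=42&amp;dir=prev">&lt;&lt; Prev</a> |
  <a href="http://www.example-ring.com/nav?site=42&amp;dir=next">Next &gt;&gt;</a>
</center>
```

Because the prev/next links pointed at the hub rather than directly at neighboring sites, every member page sent link equity back to the ring owner’s domain.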
Webrings were really popular at the end of the 90s. At one point Yahoo acquired Webring.com, but it never managed to take off. Like many of these fads, webrings are pretty much non-existent in today’s internet culture. Some sites would join multiple webrings hoping to multiply their traffic and backlinks.
Most webmasters saw a webring as a way to generate traffic and links for their website. While that might hold some truth, the owner of the webring got the full benefit, with so many one-way links pointing back to their domain.
Some webmasters would enter into very simple and informal agreements to be part of a mini-webring. Others would join larger sites such as Webringo.com (still alive, it seems).
Other webmasters went really crazy with the whole webring idea and started creating guides and ebooks on “owning” a successful webring. Quite insane, if you ask me.
Webrings were pretty much a hot mess. There are many web trends I am happy are no longer with us, and webrings are at the top of that list.
Link Exchanges

Sadly, this is one trend that is somehow alive and well. Maybe not “well,” but alive. A link exchange can mean a number of different things.
Essentially a link exchange is a service you sign up for where you agree to embed HTML on your website in exchange for links from other websites.
Now, the real selling point of these links is that you get relevant links from relevant websites. So if you own a goat farm website you will only get links from other goat farms, and you will only link out to other goat farms.
LinkExchange.com was one of the most popular services out there, but there were thousands, if not tens of thousands, of competitors as well. Microsoft bought them out in 1998 for $250 million, hoping to gain “targeted information about users on the LinkExchange network, as well as gain another channel to promote its services.”
Link exchanges exist today and are still being used by many websites, mainly small businesses that get suckered in via email marketing or web advertising. Most of today’s more popular link exchanges do not promote themselves as much as they used to out of fear of being outed or identified by Google.
Today, you can usually spot a link exchange on many pages titled “links” or “our partners.” Doing a quick Google query with some advanced operators will fish out hundreds of link exchange participants:
inurl:links.html "our partners" OR "link exchange"
And just remember: if you can spot a pattern that easily, Google can find it 100x more easily.
Doorway Pages

Let’s just use Google’s definition for this one: “Doorway pages are typically large sets of poor-quality pages where each page is optimized for a specific keyword or phrase. In many cases, doorway pages are written to rank for a particular phrase and then funnel users to a single destination.”
If you’ve been on the web for some time, you’ve seen doorway pages. Again, this tactic is alive and well today, but not nearly as prevalent as it was decades ago. I found a great example of an eCommerce site that uses hundreds of doorway pages for each manufacturer and category; tens of thousands of doorway pages on that one site.
Once the visitor has selected the category or manufacturer, they are taken to a landing page where they can purchase or are led to an affiliate.
One industry that used a ton of doorway pages is the acai berry industry aka “berriez brah!” I’m not even gonna go there right now, but the diet industry generated hundreds of millions of dollars during this era and doorway pages helped drive this commerce:
Obviously, doorway pages were popular because they worked so well for blackhat SEO. The problem is that they stopped working a long time ago, and many SEOs are still stuck in the past. I understand why so many people still spam this way; it can show short-term results, but Google pretty much whacks sites with lots of doorway pages right away.
When you crack open the source code of a website that uses doorway pages and “think like Googlebot” for a second, it’s obvious how these sites get whacked so easily. If you’d like a good read on doorway pages, check out Matt Cutts’ post on the subject from 2005.
Directories / Alternative Search Engines
Everyone and their mother had a search engine. Remember, this was “the bubble,” and search engines were quite plentiful. In those days, search engines and directories were essentially the same thing: the directories were searchable, so why not call them search engines? It wasn’t until years later, when Google hit the scene, that “smart search engines” became a thing. “Search engine submission” was a very popular term during this time, since submitting your site pretty much guaranteed inclusion in the rankings. The more, the better.
Many of these search engines / directories actually required you to link back to them. Could you imagine if Google required all websites to link to them in order to be included in the rankings?
Search Engine Submission Services

I just want to start off by saying that there were a number of very legitimate and ethical SEO companies dating back to the mid 90s (that I could find). With that said, shady SEOs and straight-up “bad deals” have been around just as long. One of the most common scams was the “search engine submission” service. These services would submit your site to as many search engines as they had in their database. Some were done by hand; others were simply bots or web apps that submitted your site automatically.
These services were literally a dime a dozen, and many of them sadly still exist today. The next tier of SEO services of the day would essentially do the same thing, but add in some extras such as on-page SEO. Some would go as far as creating and uploading a sitemap for you. There was not a whole lot of inbound marketing or content marketing going on back then.
Blast Engine is one that I remember myself, and maybe even signed up for at one point. They really had it down to a science and probably brought in a few million in sales easily.
I was elated when I found an old ad for SEO services from the year 2000 during my research. Here you can see a well-rounded representation of SEO service offerings, from low-end to high-end.
SEO consultants thrived during this era due to the simplicity of those search engines; however, the market was not what it is today. Very few companies existed on the web pre-2005, and eCommerce was a very scary thing for many people at the time.
The image above is a screenshot from 1996, as far back as archive.org goes, and shows the company that Danny Sullivan started. The company website still appears to be up and running and has kept much of the same design, tables and all. I do want to mention that Danny is a real stand-up guy and a pioneer of our industry, and I don’t associate him, his blogs, or his company with blackhat SEO.
Cloaking

This is a hairy SEO topic that is difficult to illustrate, and it was hard to find examples of. Cloaking is basically when a site shows one version of a page to users and another version to search engines. For the most part, cloaking was done server-side by sniffing the visitor’s user agent or IP address, for example with .htaccess rewrite rules or a server-side script. (A robots.txt file can only hide pages from crawlers; it cannot serve them different content.)
Example of cloaking:
- Website version A – intended for search engines: pure text and HTML, with lots of keywords and meta tags
- Website version B – intended for users: lots of images and, quite often years ago, Flash as well
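One common implementation was a mod_rewrite rule that sniffed the crawler’s user agent. The snippet below is a hypothetical sketch (filenames invented); real operations often matched crawler IP ranges too, since user agents are trivial to fake:

```apache
# .htaccess sketch: serve a keyword-stuffed text page (version A) to crawlers,
# and let everyone else fall through to the real, Flash-heavy page (version B)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (googlebot|slurp|msnbot) [NC]
RewriteRule ^index\.html$ /crawler-version.html [L]
```

The [NC] flag makes the user-agent match case-insensitive, and [L] stops further rewriting once the crawler version has been served.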
Cloaking was very popular 10 or more years ago, when Flash was everywhere but very non-optimal for SEO. Matt Cutts wrote a really interesting post on cloaking in 2007, and the comments raise some really interesting points as well.
Redirection, of the sneakiest degree
In the history of blackhat SEO, sneaky redirects have been one of the most deceptive and misleading tactics I’ve seen so far.
One example I’ve personally seen over the years is a hacked website whose links are all cloaked or redirected to an affiliate site. Some redirects are so sneaky that they detect the IP address of the webmaster (usually the top three IPs logged into the server) and exclude them from the redirect, so that the webmaster keeps seeing the “real” version of the site.
Many times a webmaster won’t even find out that this has happened to their site until someone else informs them of this.
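A sketch of how that IP exclusion works in .htaccess terms (the IPs and affiliate URL are placeholders): every visitor gets bounced to the affiliate site except requests coming from the webmaster’s own addresses.

```apache
# Redirect all traffic to the affiliate site...
RewriteEngine On
# ...except the webmaster's own IPs, who keep seeing the "real" site
RewriteCond %{REMOTE_ADDR} !^203\.0\.113\.5$
RewriteCond %{REMOTE_ADDR} !^198\.51\.100\.7$
RewriteRule ^ http://affiliate.example.com/ [R=302,L]
```

Multiple RewriteCond lines are ANDed together, so the redirect only fires when the request comes from neither excluded address. This is exactly why the owner can browse their own site for months without noticing anything wrong.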
Uncategorizable Blackhat SEO
Some of the “SEO” that I’ve seen throughout the past 20 years is just so insane it cannot be categorized. I am so happy that I found this most perfect example:
I mean, what the hell is this anyway?
Keyword stuffing? Check.
Link scheme? Check.
Poor user experience? Check.
Doorway pages? Check.
Scraped content? Check.
Some websites went so crazy with blackhat SEO that they just used the “let’s try everything” approach. Sadly, some of these sites…
Other websites would straight-up offer incentives in exchange for a link. You’d see quite a bit of this:
What did I miss?
I know I missed a lot. I know I didn’t cover the “gibberish content” category or scraped content in much depth.
Is there any area that I completely neglected or forgot about?
I wanted to say a big thanks to archive.org for archiving all of these wonderful websites. Please donate to them if you can!