According to a press release on sec.gov, Colt Defense LLC has acquired New Colt Holding Corp., which owns Colt's Manufacturing Company LLC. Here's the 8-K. This seemed like a possibility when Colt bought out the shares Blackstone (an investment group) held in Colt Defense back in March.
Back in 2003, Colt split itself in two in an effort to shield its substantial defense assets from civil liability. Such lawsuits ended up having little effect on any company in the industry, while the split probably did significant damage to Colt's efforts to develop new products. Meanwhile, the rest of the industry went full speed ahead with all kinds of new product development. Companies such as Smith & Wesson and Ruger sold zillions of AR-15s while Colt commercial rifle sales flatlined. Defense sales are down and will be down for the foreseeable future.
Therefore, it makes sense for Colt to consolidate and invest some money in bringing new consumer products to market and perhaps improving their ability to produce and market old ones. There appears to be no intent to move the company out of Connecticut, but their factory is a bit out of date compared to other companies in the industry, and they'll need to make big strides to catch up to where they could have been a few years ago.
What does this mean for consumers?
It should mean a lot more Colt rifles on the civilian market. It should also mean more time and money devoted to Colt handguns (pistols and revolvers) from a design and manufacturing standpoint.
I'm looking forward to seeing what Colt has to offer at SHOT 2014 and 2015.
What follows is a very long article. If you read the whole thing, I applaud your dedication. If you're pressed for time, just read the bold section titles; you'll get the drift.
Note: I attempted to discuss some of my concerns with a CD staff member at the event and did not receive a meaningful response. They are welcome to comment here.
Over July 4th weekend, my friend Paul and I competed in the 2013 Competition Dynamics 24 Hour Sniper Adventure Challenge. Last year we gave it our all and placed third; this year we also gave it our all but had to drop out early due to medical issues (partial knee dislocation) on my part. Because we were doing well up until that point - and because of what many see as scoring irregularities - this year we placed fourth. I'm auctioning off my main prize for charity.
I was rather unhappy with the way the event was organized and run this year. I stated as much on Facebook even before the results came out. It's been a week since the event, during which time I gave myself time to think about everything in detail and solicit the opinions of friends and fellow competitors. What follows are my words alone, but many of the same opinions were voiced by others at the time of the event.
For those who are unfamiliar with the event, it's a race during which "two-man teams will be required to navigate at least 30 miles on foot to complete the course. Along the way, there will be a series of tasks to accomplish to gain additional points. These tasks may include: shooting problems with long-range rifle, carbine, and pistols; problem-solving; physical challenges; fieldcraft; communication; target recognition; memory; and other tasks."
Naturally, it's the sort of event that attracts totally awesome competitors, and I met some truly great guys (and one gal) who define what it is to be tough and resilient under harsh conditions. As with last year, I stand in awe of the folks who placed higher than we did - they are amazing individuals. That said, anyone who read what last year's event entailed and decided to show up this year deserves respect.
I feel that I should preface this article by saying that I have a lot of respect for Zak Smith of Competition Dynamics, not only from an intellectual standpoint, but also from a personal one. I appreciated his lending us a CD-owned SPOT tracker after ours failed to connect just prior to the event. My surprise at the way this event was conducted was only heightened by my previous (very positive) experience with CD.
Problem #1 - The Event Location Sucked And So Did The Lodging
While there were a number of serious issues with the event, many sprang from one root cause - Felix Canyon Ranch, where the event was held, was utterly unsuitable for the task. Theoretically, an adventure race which takes place in harsh terrain could be held anywhere harsh terrain may be found. In reality, there are a number of considerations which should dictate the elimination of certain venues. These include access to the area from major metropolitan centers, layout of the area from a medical team access standpoint, and the availability of quality on-site lodging.
Felix Canyon Ranch was approximately 90 minutes away from the nearest town of any decent size, Roswell. It was several hours from any airport to which a commercial flight could be found, limiting access to the event for many previous competitors. It was also several hours from the nearest medical center in Artesia. None of these alone should have completely eliminated this location from consideration, but the lodging (or lack thereof) should have been the nail in the coffin.
There were several small buildings with four rooms, each containing two wooden bunk beds and a small bathroom. These rooms weren't too bad, but there were not enough of them to hold everyone competing in and/or watching the event. Enter the stables.
No, really, the stables. The ranch owners had hastily converted a former horse stable into a makeshift bunkhouse with the addition of rickety metal bunk beds and cheap showers/toilets. This was apparently done for a Canadian military unit which had used the ranch for training at some point in the past, and the ranch owners and Competition Dynamics folks apparently felt it was suitable for human habitation.
My teammate and I spent the first night - the night before the event started - in the stables along with thirty or forty other lucky individuals. Of the approximately ten showers that had been installed, half did not have shower curtains, and only two actually worked. Males and females (including the underage sister of one competitor) were assigned to this lodging without any thought of propriety. Dozens of small beetles and flying ants crawled all over my body while I slept, and my stablemates didn't have any better luck. I had a top bunk; lodging was so overcrowded that some guys had to sleep on mattresses on the floor. According to those on the other side of the stable, loose roof or siding panels blew in the wind and banged on the structure all night.
To put it bluntly, the housing was unacceptable by even military standards. I have spent plenty of time sleeping in an open squad bay and did not get out of the military so that I could pay my hard-earned money to do it all over again. Yes, we had to pay for this wonderful housing - $80 per night for competitors, and $100 per night for spectators. For a team of two, that was $320 for two nights. We were "strongly urged" to stay on site due to the remote location of the ranch - and no doubt due to the fact that someone was making money hand over fist by bending participants, spectators, and sponsors over and violating their wallets in a most unkind manner.
It is literally cheaper for two people to stay (mid-week) at the Venetian in Las Vegas than it was to sleep on the floor during the Sniper Adventure Challenge. Needless to say, the Competition Dynamics staff and family members did not sleep in the stables. Nor did the "lodging" page on the ranch website accurately reflect these conditions.
Problem #2 - Shooting Was Irrelevant Which Is Ironic Because It's Called The Sniper Adventure Challenge
In last year's Sniper Adventure Challenge, shooting played a minor role in overall scores - for the top few teams (including ours), shooting scores were approximately 5% of overall scores. While I fully understood that Competition Dynamics puts on other shooting matches which have a major focus on shooting and a minor focus on "a physical element," shooting was such a minor part of last year's event that it had almost no effect on overall scores. Finding and reaching two out of fourteen mandatory land navigation checkpoints added more to almost every team's score than every shot they took during the match.
As it was the first year of the event, I held my criticism rather than telling the match organizers what I thought, figuring they would fix an obvious problem on their own - namely, that something called the Sniper Adventure Challenge could be won by a team that brought a large stick instead of a rifle. I figured that shooting would still be a minor part of the event, albeit a less minor one, so while we did focus on shooting, we also worked on land nav and physical fitness.
But that didn't change. In fact, some teams didn't even get to shoot their rifles this year. Of those that did, many didn't make any hits.
Part of the problem was the layout of the course, which I'll address next.
Problem #3 - The Course Layout Was Dumb, Which Competition Dynamics Would Have Known If They Had Bothered To Run The Course
Okay, that was a really long title, but it's pretty accurate.
Last year's event was a tough course. This year's event was a stupid course. What's the difference?
Well, in both events, competitors were given coordinates for mandatory and bonus land navigation checkpoints. Mandatory checkpoints had to be taken in order, from 1 to 14 last year and 1 to 18 this year. Bonus checkpoints could be taken in any order.
However, last year's land navigation course (which still managed to cause perhaps dozens of teams to get lost during the night) involved the use of ten meter grid coordinates - or coordinates enabling a person to narrow down their destination to a ten meter by ten meter box. This year, hundred meter grid coordinates were used. In addition, checkpoints themselves went from 3-4 foot tall 2x4s marked with reflective tape (last year) to 1-2 foot tall 2x2s with essentially no markings whatsoever (this year).
So instead of plotting an exact line from one ten-meter box to another, finding a checkpoint, and moving on, teams navigated from one ten-thousand-square-meter area to another, at which point they engaged in a scavenger hunt for the checkpoint stake. As the entire ranch was covered in small agaves, each with a straight "branch" approximately one to two feet tall protruding from it...well, like I said, it was a scavenger hunt. Land navigation instructors and experts whom I and others consulted said it was "unbelievable" for CD to have used hundred-meter grids for this purpose. It is likely that they simply marked each coordinate with their GPS units, which may have only offered these coarser grid coordinates, and called it good. I don't know, nor do I care. The results mattered, and the results sucked.
I should note that Paul and I (mostly Paul) located 5 bonus checkpoints, tying two other teams for the second-most bonus checkpoints. Side note: last year bonus checkpoints were the decisive factor in determining how well a team placed; this year each bonus checkpoint was worth nearly twice as much as last year. It should go without saying that the team which collected the most bonus checkpoints this year (13, or 8 more than we found) won the race.
But that's not all!
Last year, there were, as I said, fourteen mandatory checkpoints. All but one of these checkpoints involved a challenge or shooting stage, and the one that didn't was along the way to another checkpoint, so it really didn't cause problems. But what I'm getting at is that last year's event was fun. You knew that there was a point to walking through the night - that when you reached your destination, you would be doing something to test yourself and to try to best the competition.
This year, there were seven unmanned/"no challenge" checkpoints out of 18 total. Ten, if you consider the unreasonable "drop dead" times. What do I mean by that?
Teams were told that they had to make it to checkpoint 7 by midnight, checkpoint 8 by 0200, and checkpoint 11 by dawn in order to participate in the challenge, otherwise they could collect the land nav points but not the challenge points. As it turns out, they had screwed up the written drop dead time for checkpoint 7, which closed at 0200. Seven teams out of twenty-five (including ours) made it to checkpoint 7 and completed the challenge before 0200. Four teams (not including ours) made it to checkpoint 8 before 0200. Checkpoint 11? Wait for it...wait for it...exactly zero teams made it to checkpoint 11 before dawn.
Why did teams need to reach checkpoint 11 before dawn? Because it was a stage (reportedly) involving shooting and driving using night vision optics. Who provided the night vision optics? MOD Armory. Did this company pay to sponsor the stage? Yes. Did they therefore pay to sit in the desert all night, waiting to demo products to people who would never show up? Yes. They might as well have gone to Roswell and demoed NVGs to people at the UFO convention as "alien detection goggles." MOD Armory would have received far more return on their investment.
There were teams which were physical fitness and land navigation studs - the first and second place teams, for example - and even they did not manage to reach checkpoint 11 in time. There were teams composed of studs who skipped several mandatory checkpoints, saving themselves many miles of travel - and even they didn't manage to reach checkpoint 11 on time!
It was absolutely ludicrous for Competition Dynamics to have set the stage locations and drop dead times where and when they did. Had they tested the course themselves - allowing for slow and fast teams to reach sponsored checkpoints at appropriate times, providing coordinates precise enough for on-foot navigation to very specific points, and so on - they would have had a much better event. Am I saying that everyone should be able to finish? Absolutely not. But how did they get it so right last year (20% of teams finished the whole course), and screw it up so badly this year?
But it wasn't over yet...
Problem #4 - TOMCARs Suck And So Did The Event Layout But I Already Mentioned That
Last year, vehicle support was provided by Armor Works. They brought modified Polaris Razors, which functioned.
This year, vehicle support was provided by TOMCAR. They brought TOMCARs, which did not function.
Rather, they apparently functioned when they felt like it. This reportedly caused issues with the planned nighttime stage (not that it mattered, since no one made it there on time anyway) and also with reaching teams in need of assistance.
Many teams were in need of medical assistance or evacuation within eight hours of the start of the race. Due to the layout of the course, many areas were essentially unreachable by vehicle within a reasonable period of time, and several teams in need of serious and immediate assistance had to "self-evacuate" for several miles before they could reach help. I keep using the phrase "last year," but here goes - last year we were told that the Razors could bring a medical team to any point on the course within 30 to 45 minutes. This year, teams literally waited for hours, even after calling for non-life threatening assistance via SPOT tracker. We had CD-issued radios, but they were essentially useless over most of the course.
This was in part due to the span of the course, which was spread out over a much larger area than necessary. It would have been possible to condense the overall area of the course, reducing medical access times and improving radio reception, without compromising the "30-40 mile" length.
And yet Competition Dynamics thanked TOMCAR on Facebook. Not in a satirical yet truthful way, such as, "Thanks for taking a dump when we needed you," either.
Problem #5 - Efforts To Fix Problems With Last Year's Scoring Only Made Things Worse, And Oh By The Way, The Course Layout Was Dumb
The one thing most people seemed to complain about last year (other than shooting being irrelevant) was that several teams which skipped mandatory checkpoints still managed to score higher than teams which completed the entire course.
In addition to the concept of skipping a "mandatory" checkpoint and still "finishing" a race being bothersome, teams which cut a few mandatory checkpoints off their route saved themselves from quite literally hiking to the top of a mountain. Did this make it easier for them to shoot, having rested at night instead of hiking, shooting, crossing rope bridges, etc? I'd say so, and so did many other people.
So this year, Competition Dynamics instituted a 200 point penalty for each skipped checkpoint. Keep in mind that reaching a mandatory checkpoint was worth 100 points, so if a team hit 10 mandatory checkpoints and skipped the other 8, they would "finish" the race with -600 points.
However, teams which hit every checkpoint but did not finish the entire course - either because they didn't cross the finish line before the event end time or because they quit or left the race for medical reasons - did not receive any penalties. This upset teams which had skipped checkpoints and thus "crossed the finish line" in a merely symbolic sense, because they placed behind teams which dropped out of the race towards the beginning.
In Competition Dynamics' defense, this was not a "choose your own adventure" event. Penalties for skipping mandatory checkpoints were clearly stated in the event briefing and in the written guidelines handed out the night before. Scores from last year were available for over eleven months, and during that event, the highest place for a team which skipped checkpoints was 6th - that was before penalties were introduced this year! How any team could have decided that skipping checkpoints was a good idea, whether part of a "strategy" or not, is simply beyond me. Logically speaking, they did not finish the entire race.
That said, if a team doesn't complete all of the mandatory checkpoints in time, especially if they drop out early, they probably shouldn't be considered in the standings, because...well, because they didn't finish.
Unfortunately, no team finished the entire course before the deadline. One team (the winning team) reportedly would have finished on time, had they not forgotten to stamp their scorecard at the last mandatory checkpoint - they went back to get the stamp and were heading to the finish line when the deadline hit.
Compare this with last year, when seven out of thirty-five, or twenty percent of teams managed to complete the entire course before the deadline. Why such a difference between events?
The course layout this year placed mandatory checkpoints with challenges along major roads, and unmanned mandatory checkpoints at random points in the desert. Last year, mandatory checkpoints with challenges were located not only along roads in low points, but also at the tops of the highest mountains in the area.
This year, teams which looked at the whole course and decided from the beginning that it was too difficult simply had to walk along a road for approximately twenty-five to thirty miles in order to complete "most" of the course; teams which wanted to hit every mandatory checkpoint had to divert from the road numerous times and cross hills covered in small sharp rocks. Considering that the event started out stupid (the 100lbs-of-rocks-duffel-bag-carry was replaced with digging a shallow hole? What?) and only got dumber as teams walked for hours across the desert only to get a check in a box, I don't really blame certain teams for deciding halfway that hitting every mandatory checkpoint was a waste of time.
In summary, teams were penalized for skipping checkpoints, but the course was highly unrealistic (evidence: no one reached checkpoint 11 by dawn and no one finished the course on time) and incentivized skipping checkpoints.
All of that said, the teams which won the race and took second place absolutely deserved their positions; I would venture to say that the third place team deserved their place as well, even though they dropped out not long before the end time. We were in fourth place, which I am still somewhat uncomfortable with. However, we traveled similar distances compared to some of the teams which "finished" - plus we encountered greater elevation changes. So I'm not ashamed of our performance, and according to CD we did deserve our fourth-place finish as we stuck pretty close to the intended route of travel for 31 miles. However, it probably would have been better to limit prizes and placement to the top three teams and tell everyone else that they didn't rate (for various reasons).
This was an event that very few people could be happy with, and all of the problems may be traced back to Competition Dynamics. I let a few minor things slide last year because it was their first time running the event, although they do have years of experience running other shooting events. However many minor problems there may have been last year, the event was fun and challenging in an intelligent way. This year, the combined cost of entry and lodging was nearly double last year's, but it wasn't fun at all, just stupid. My friends and I don't need to pay over $900 to walk around in the desert for a while; we can do that at home, and I think many other people might feel the same way.
Competition Dynamics bills their products as "WORLD CLASS EXTREME PRACTICAL SHOOTING EVENTS RUN BY PROFESSIONALS." The 2013 Sniper Adventure Challenge was anything but "world class." It was in almost every regard an example of how to not run such an event. Competition Dynamics may have unwittingly created a market for an event which they cannot (or will not) deliver, allowing a potential competitor to step up and offer their own take on the "shooting adventure race." Only time will tell which approach is superior.
I have been reloading for as long as I have been seriously involved in the firearm world, and I have over time gravitated to using three brands of projectiles for rifle shooting: Barnes, Berger, and Sierra. I like Barnes' Match Burners because they're priced lower than the competition, but even so they shoot very well, allowing me to shoot more often with the same amount of money. I like Sierra MatchKings because they're available in a million flavors and I am able to find them almost everywhere I go - plus they're used in a lot of factory ammo, allowing me to play the fun game of "with the same bullet, are my handloads better than factory ammo?"
Bergers? I like Berger bullets because they shoot insanely well, but I really like Berger as a company because they are all about sharing data. Berger not only provides sectional density, G1 and G7 ballistic coefficients, and other data for every bullet they make in one handy document, but they also provide form factors - and their chief ballistician, Bryan Litz, writes easy-to-understand explanations of why these things are important.
I highly recommend digesting Mr. Litz's articles, but for those who don't have the time or inclination to read them (or some other articles which also cover the topics at hand), I will summarize ballistic coefficients and form factors as best I can here.
Ballistic coefficient (BC) - a number representing the relative ability of a projectile to maintain velocity. Relative to what? A "standard projectile."
Standard projectile - a defined projectile shape used as a benchmark for velocity retention, against which all projectiles of a similar shape may be compared.
Sectional density (SD) - ratio of mass to frontal area. While often used to determine the terminal effectiveness of a projectile, in external ballistics terms, this number is used along with form factor to calculate ballistic coefficient.
Form factor - the drag of a projectile divided by the drag of a standard projectile. This number represents the efficiency of the shape of the projectile regardless of its weight.
Why is form factor important? Consider two bullets with the same ballistic coefficient but different weights. The heavier bullet would only be as efficient (that is, have as flat a trajectory) as the lighter bullet if it were pushed to the same velocity. But since we can generally make lighter bullets go faster, the lighter bullet with the same ballistic coefficient will have a better trajectory. This is where form factor comes in: the heavier bullet might be 3% less efficient than the standard projectile, a form factor of 1.030, while the lighter bullet might be 5% more efficient, a form factor of 0.950. Why shouldn't a shooter calculate the relative efficiency of projectiles and use that as part of their decision making process? Beats me. For a list of some Berger form factors, click here.
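The arithmetic behind all of this is simple enough to sketch in a few lines. This is just an illustration of the relationships between sectional density, form factor, and ballistic coefficient; the bullet weights and form factors below are made-up .308-caliber numbers, not anything from a manufacturer's spec sheet.

```python
# Minimal sketch of the SD / form factor / BC arithmetic described above.
# All bullet weights and form factors here are illustrative only.

def sectional_density(weight_grains, diameter_inches):
    """SD = bullet weight in pounds divided by diameter squared.

    Note that the standard projectile (one inch, one pound) has an SD
    of exactly 1, which is why real rifle bullets come in far below it.
    """
    return (weight_grains / 7000.0) / diameter_inches ** 2

def ballistic_coefficient(sd, form_factor):
    """BC = SD / i, where i is the form factor vs. the standard projectile."""
    return sd / form_factor

# Two hypothetical .308" bullets with the form factors from the text:
heavy = ballistic_coefficient(sectional_density(185, 0.308), 1.030)  # 3% draggier than standard
light = ballistic_coefficient(sectional_density(155, 0.308), 0.950)  # 5% slicker than standard

print(f"heavy bullet BC: {heavy:.3f}")
print(f"light bullet BC: {light:.3f}")
```

Run the numbers and you can see how a slick shape claws back some of what a bullet gives up in weight - exactly the trade-off form factor is meant to expose.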
Earlier I mentioned BCs, and many of you may have seen these numbers advertised on boxes of bullets or even loaded ammunition. Shooters use ballistic coefficients to estimate trajectories for their chosen ammunition. Most bullet manufacturers use a G1 BC, which references a standard projectile which is (literally) straight out of the 1800s.
Since a ballistic coefficient is a comparison of the ability of a projectile to maintain velocity compared to a standard projectile, the use of a relatively inefficient standard projectile shape will give the false impression of a numerically higher ballistic coefficient. Put simply, the G1 standard projectile's inefficient shape causes it to shed velocity faster than other shapes. However, most rifle bullets are so much smaller than the standard projectile diameter and weight of one inch/one pound that even a more efficient shape cannot make up for being smaller. So the ballistic coefficient of essentially every rifle bullet will be much less than 1, where 1 is equal to the BC of the standard projectile.
The basic problem with using a G1 BC is that the inefficient but large standard projectile and the efficient but small projectile we're trying to estimate a trajectory for are not going to fly through the air in the exact same manner at all velocities. Because they're different shapes, they will behave differently. The two ways to address this are to 1) attempt to calculate multiple ballistic coefficients for a velocity "window," inside which the efficient projectile should behave kinda-sorta in a manner relative to the G1 projectile at that same velocity (which is what Sierra does), and 2) use a standard projectile shape which is similar to the projectile we're trying to estimate a trajectory for (which is what Berger does). The standard projectile which most closely approximates modern boat-tail projectiles is called G7.
Both approaches work when used correctly. The problem with Sierra's approach is that it is unwieldy and requires entering multiple sets of data to retrieve a single result. The problem with Berger's approach is purely cosmetic: because the G7 standard projectile is a low-drag shape much like the bullets being measured, form factors relative to it come out close to 1, and the resulting G7 ballistic coefficient is numerically lower than a G1 BC for the same bullet.
Put simply, a G7 BC looks unimpressive when one has only seen G1 BCs. The payoff, though, is a much more accurate trajectory than using a single G1 BC, and simpler calculations than when using multiple G1 BCs. From an objective standpoint, I see little reason but marketing to continue selling boat-tail projectiles under a G1 BC calculation.
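To put rough numbers on that: for long boat-tail match bullets, the G1 BC commonly works out to around 1.9 times the G7 BC for the very same bullet. That ratio is a rule of thumb, not a constant - the true relationship varies with bullet shape and velocity - but it shows why the G7 number "looks" unimpressive at a glance:

```python
# Rough illustration of why a G7 BC looks smaller than a G1 BC even
# though both describe the same bullet. The 1.9 ratio is a commonly
# cited rule of thumb for long boat-tail match bullets, not an exact
# constant (assumption for illustration).

G1_TO_G7_RATIO = 1.9

def approx_g1_from_g7(g7_bc, ratio=G1_TO_G7_RATIO):
    """Ballpark the G1 BC a typical boat-tail bullet would advertise."""
    return g7_bc * ratio

# A hypothetical bullet with a G7 BC of 0.270:
g7 = 0.270
g1 = approx_g1_from_g7(g7)
print(f"G7 BC {g7:.3f} ~ G1 BC {g1:.3f}")  # same bullet, two reference shapes
```

Neither number is "better" in the abstract; the point is that comparing a G7 BC on one box against a G1 BC on another is apples to oranges.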
So why do I like Berger bullets? Because not only do they market their products in the most transparent manner possible, but they use their research in these areas to create the most precise, accurate, and efficient bullets possible. Perhaps most importantly, when they make an error, they quite graciously announce it and what they're doing to fix the problem.
I wish more companies in this industry were like Berger.