Picture this fictional scenario: A customer purchases a new Humvee largely because the salesman says it’s a new hybrid version that can get up to 40 miles per gallon. After a few days of routine driving around town, the customer notices the gas gauge is on empty even though only 100 miles have clicked off on the odometer. The salesman receives a phone call from the less-than-satisfied customer and responds by mailing a study that backs up the mileage claim. The study confirms that the new Humvee can get up to 40 mpg . . . going downhill on a 6% grade with a 50-mph tailwind. Technically, the salesman told the truth, but in reality, the customer was sold some snake oil to go along with his vehicle.
Thankfully, measuring gas mileage is easy to double check, and reporting standards help prevent the above scenario from occurring. However, if a similar story had been told regarding a deicer performance claim, would it have been perceived as fact or fiction?
Deicer performance claims associated with environmental impact, concrete damage and corrosion are pretty easy to find. These issues are all quite complex, involving many different facets and influenced by many different variables. Often, the key variables determining performance are difficult to control and measure in an accurate and reproducible manner. Human nature desires black-and-white answers to questions of concern. However, with issues such as these, the most accurate answers are often not black and white but shades of gray.
For trusting individuals, a lack of recognition of the complexities associated with these issues may lead to acceptance of blanket statements without much question. If sales literature states that a product is “environmentally friendly” or “safe on concrete” then it must be true . . . especially if there is a nice-looking literature piece to go along with the claim. However, when someone reasonably proficient in the underlying science digs into the supporting data, it is often easy to find significant gaps, or even cases of deliberate deception.
This article will not answer the “fact or fiction” question posed above, but will hopefully reveal a few new insights into key deicer performance issues that will help users navigate the choppy waters of confusing and conflicting performance claims in their quest to get a dollar’s worth of performance for a dollar’s worth of product.
Facets of environmental impact include biodegradability and toxicity to aquatic species, soil-dwelling organisms, terrestrial plants and nonmammalian terrestrial species. Aquatic toxicity can be further broken down to include acute and chronic impact on fish, invertebrates and plants. “Toxicology” generally refers to impact on mammals from ingestion, skin or eye exposure, inhalation, carcinogenicity, mutagenicity and reproductive effects. When a deicing product is accompanied by a blanket claim of “environmentally friendly” or “non-toxic,” experience has shown that the claim is often based on one or two test results appearing favorable to the product’s position.
Supporting a blanket claim in subject areas as broad as these with one or two data points is technically inadequate, if not invalid. Worse yet, sometimes these supporting data points are misleading or deceptive. The most common way this deception occurs is when test results make “apples-to-oranges” comparisons. For example, a claim based on LD50 data (a measure of toxicity by ingestion) might state “Deicer X is less toxic than Deicer Y.” However, the claim fails to reveal that Deicer X is much more dilute than Deicer Y, and this fact is not readily known by most users. The claim also doesn’t reveal that the application rate for the dilute product must be higher than that for Deicer Y to achieve the same degree of deicing.
In the real world, environmental impact is determined by the combined effect of product concentration and application rate. While it might be technically true to state that Deicer X is less toxic than Deicer Y, it is misleading and deceptive relative to potential impact associated with actual application scenarios.
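The combined effect of concentration and application rate can be made concrete with a back-of-the-envelope calculation. The figures below are hypothetical, chosen only to illustrate the apples-to-oranges problem, not drawn from any product data sheet:

```python
# Hypothetical figures for illustration only -- not measured product data.
# "Deicer X": dilute liquid, 25% active ingredient, applied at 40 gal/lane-mile.
# "Deicer Y": concentrated liquid, 30% active ingredient, applied at 30 gal/lane-mile.

def active_load(concentration_pct, application_rate):
    """Active ingredient delivered to the roadway (same units as the rate)."""
    return concentration_pct * application_rate / 100.0

deicer_x = active_load(25, 40)  # 10.0 units of active ingredient per lane-mile
deicer_y = active_load(30, 30)  # 9.0 units of active ingredient per lane-mile

# Per-gallon toxicity tests favor the dilute Deicer X, but the environment
# sees the total applied load -- which in this sketch is higher for Deicer X.
print(deicer_x, deicer_y)
```

The point of the sketch: a per-gallon toxicity comparison can rank the products one way while the actual environmental load, once the higher application rate of the dilute product is counted, ranks them the other way.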
Rule of thumb: If active ingredient concentrations are not clearly specified when deicer performance data is being compared, the data should be disregarded because there is no way of knowing if a valid comparison is being made.
It also is important to understand that there is variability associated with test methods used to evaluate complex issues, even when objective, technically competent labs perform the analysis. Seldom can valid conclusions be drawn from any single test result or study. One must review results from a number of different sources to develop a true picture of performance. Similar to the aim point of a shotgun blast being the middle of the BB spread, deicer performance is best represented by the middle of the spread of objective results. For example, when I receive requests for aquatic toxicity information, my response includes a data table with 23 separate, independent test results, including the references to the studies that generated the data.
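The “middle of the spread” idea amounts to simple aggregation: rather than quoting any single result, report a central value and the range across independent studies. The numbers below are invented placeholders, not actual toxicity data:

```python
import statistics

# Hypothetical LC50 results (mg/L) for one product, as reported by
# several independent labs. Invented placeholder values -- real data
# would come from the cited studies.
lc50_results = [3200, 4100, 2800, 3900, 3500, 4400, 3100]

center = statistics.median(lc50_results)       # middle of the spread
low, high = min(lc50_results), max(lc50_results)

print(f"LC50 ~ {center} mg/L (range {low}-{high}, n={len(lc50_results)})")
```

A median with a stated range tells a user far more than any one number cherry-picked from the high or low end of the spread.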
Concrete is a very complex material. Factors affecting the durability of concrete include air entrainment, water-to-cement ratio, type of admixtures, type of aggregate, depth of cover over steel, finishing practices, curing procedures and surface treatments. Not surprisingly, still another layer of complexity exists beneath this list. For example, not only is the quantity of entrained air important, but both the size and spatial distribution of the entrained air must be right to produce a durable concrete suitable for winter climates.
In 1956, the Portland Cement Association (PCA) published a bulletin titled “Studies of ‘Salt’ Scaling of Concrete.” In this study, the PCA scientists found that scaling was not specific to salt-based deicers, but also occurred when organic chemicals such as urea and ethyl alcohol were used as deicers. From this, they concluded, “The mechanism of surface scaling is primarily physical rather than chemical.” This physical process starts with the fact that concrete is a rigid, but porous, material. The porosity allows melt water from the deicing process to soak beneath the concrete surface. When the absorbed melt water re-freezes, it expands approximately 9%, pushing hard on the surrounding rigid structure. If the pressure exerted by the ice formation exceeds the strength of the concrete, the concrete will crack and crumble, which eventually appears as surface spalling or flaking. This mechanism is independent of the type of deicer used to create the melt water.
If this is true, why are there so many claims and counterclaims indicating one type of deicer is more “concrete friendly” than another? One reason is that there are different test methods for concrete, and different methods often yield conflicting results. Also, lab test results do not necessarily reflect real-world performance; a “best-to-worst” ranking in a lab test may not translate to the same “best-to-worst” ranking in the real world.
The relevance of lab test results to real-world performance must be proven, not assumed. The closer the testing comes to real-world conditions, the more likely the test results will be valid. One example of a concrete testing program that came very close to simulating real-world conditions is the PCA’s 43-year outdoor test program. This program tested 180 outdoor concrete specimens exposed to natural weathering, freeze-thaw cycling and application of deicers after snowfalls. It is hard to argue against the validity of these results, which showed minimal surface scaling effects after 43 years of NaCl and CaCl2 applications when good concrete practices were used.
Today, it is recognized that some deicers are known to attack concrete and some are suspected of doing so under certain conditions. There also is the issue of concrete damage associated with rebar corrosion, another complex topic that will not be addressed in this article. However, relative to surface scaling and applications of NaCl or CaCl2, the data doesn’t get more solid than that generated by the PCA over the last 50 years.
There are many different types of corrosion, including uniform, pitting, crevice, filiform, galvanic, intergranular, exfoliation and biological. The factors influencing corrosion associated with deicing chemicals include time, temperature, humidity, composition, pH, oxygen exposure, fluid movement, alloy type, surface area and trace impurities.
One surprising example of complexity is how trace impurities can affect corrosion. Small amounts of heavy metals, such as copper (>10 ppm) or mercury (>0.01 ppm), can cause substantial and rapid corrosion of aluminum. Another complicating factor is that corrosion rates are often nonlinear, meaning that they may start fast but taper off with time, or start slow and increase with time. This presents a challenge when developing or selecting a corrosion test method, because the temptation is to minimize the time factor in the interest of generating immediate results.
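The trouble with short tests and nonlinear rates can be illustrated with two classic idealized rate laws: linear kinetics (constant rate) versus parabolic kinetics, where a growing corrosion-product layer slows further attack. The rate constants below are arbitrary, chosen only so the two curves cross:

```python
import math

def linear_loss(k, t):
    """Idealized linear kinetics: metal loss grows at a constant rate."""
    return k * t

def parabolic_loss(k, t):
    """Idealized parabolic kinetics: loss ~ sqrt(t), tapering with time."""
    return k * math.sqrt(t)

# Arbitrary rate constants chosen so both laws give equal loss at t = 100 h.
k_lin, k_par = 0.05, 0.5

for t in (1, 10, 100, 1000):
    print(t, linear_loss(k_lin, t), round(parabolic_loss(k_par, t), 2))
# At t = 1 h the parabolic case shows 10x the loss of the linear case;
# by t = 1000 h the ranking has reversed.
```

A one-hour test would rank the parabolic metal as far worse, while a long exposure ranks it far better, which is exactly why test duration must be chosen with the real service life in mind.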
In 2002, the Colorado Department of Transportation published a study titled “Corrosion Effects of Magnesium Chloride and Sodium Chloride on Automobile Components,” which was conducted in response to public concerns over perceived increases in aluminum and alloy steel corrosion associated with increased usage of corrosion-inhibited liquid deicers. In Phase II of the study, results from two different corrosion test methods were compared. The methods were NACE TM-01-69, as modified by the Pacific Northwest Snowfighters (PNS), and SAE J2334, developed by a consortium of North American automobile and steel producers. The NACE TM-01-69 method found NaCl to be more corrosive than MgCl2, while the SAE J2334 method found the opposite to be true. Which result better represents real-world corrosion performance? Among the conclusions made in the DOT’s study, the following two speak most directly to this question:
- Under high humidity condition (wet), MgCl2 tends to cause higher levels of corrosion; and under dip condition (immersion), NaCl results in higher level of corrosion; and
- In the real service situation, a vehicle may be exposed to a very specific and complicated condition, which may not be represented by either one of the testing methods.
Based on information provided in the report, it is interesting to note that the developers of SAE J2334 apparently went to great lengths to confirm that the method represented real-world performance. Their statistical analysis confirmed a strong correlation between lab results and five years of field testing.
Also, their high-tech analysis, including the use of scanning electron microscopy, energy dispersive X-ray mapping and other tools, indicated that corrosion products and morphology of attack associated with the lab test matched those associated with the on-vehicle tests. These analyses, combined with the fact that the results of the SAE J2334 tests seemed more consistent with public feedback, indicate that SAE J2334 may be more representative of real-world conditions than the NACE TM-01-69 test. However, this is opinion, not fact.
It is obvious that these deicer performance issues are filled with complexities, providing fertile ground for confusing and conflicting performance claims, perhaps even intentional deception. For stakeholders in the snow management industry, a vision of the future should include accelerated movement toward standardized, performance-based specifications measured by scientifically solid test methods administered by independent expertise centers whose objectivity, experience and technical excellence are unquestionable. With leadership from groups like PNS and the Transportation Research Board Winter Maintenance subcommittee, the industry has taken the first steps down this road, but the journey remains long, winding and full of potholes. The next leap forward will require stakeholders across North America to unify their efforts and partner with independent expertise centers to unravel the confusion and conflict associated with deicer performance issues. Until this happens, here are a few key points to remember when evaluating deicer performance claims:
- Beware of blanket claims—they are usually an oversimplification of a complex issue;
- Different test methods often produce conflicting results. It may be difficult to know which result is more applicable to real-world scenarios;
- Laboratory tests do not necessarily represent real-world performance. Correlation with real-world scenarios must be proven, not assumed;
- There is variability associated with every type of lab test. Multiple results from different sources must be reviewed to gain the proper perspective on performance; and
- Do not trust performance comparison data that doesn’t clearly specify the composition of the materials being compared and provide a reference for the test methodology.