• Which water technology will save California from its long, dry death?

    Solar receivers at the WaterFX demonstration plant in the Panoche Water and Drainage District in California.

    Water is complicated—especially in the West.

    For years, willful ignorance has prevailed. Infrastructure projects allowed water to flow in places it would not otherwise be found. Seemingly plentiful supplies allowed agriculture to flourish. California raced to become a top producer of fruits, vegetables, nuts, and wine. Water gushed from the taps, kept cheap by the sheer political willpower invested in sustaining a blissful mirage of water abundance.

    Sure, we've experienced water hardship in the past. A 1976-77 drought in California popularized a proverb that gets stuck on repeat in my head, like an annoying earworm: "If it's yellow, let it mellow; if it's brown, flush it down."

    A drought in the late 1980s solidified water conservation habits for many in the state.

    Over time, though, we forgot these difficulties, because it's easier to exist in the belief that our water will always be there when we need it. So when history repeated itself, the California of 2014 was surprised to find a serious water problem on its hands.

    Suddenly, it was gone. Wells went dry. People began stealing water. Extreme drought conditions, dwindling water reserves, and shrinking aquifers all influenced California Governor Brown's decision to further limit everyone's water use.

    The mandate requires Californians to reduce water use by 20%. Additionally, cities may now fine water-wasters, and some locales are promoting drought-tolerant yards. Will this effort solve our water shortage woes?

    The short answer is no. Here are numbers to help you understand why:

    California manages about 40 million acre feet (13 trillion gallons) of freshwater a year, with urban use accounting for only around one-fifth of the total (about 9.1 million acre feet per year). A 20% reduction in yearly urban use would therefore free up about 1.8 million acre feet across the state. The Pacific Institute estimates that residential urban use could actually be made more efficient by a whopping 40-60%, and urban businesses and industries could improve by 30-60%. Still, even a 60% reduction would only free up about 5.5 million acre feet of the grand total.
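
    To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The totals and percentages are the figures cited above; the script itself is just illustrative:

    ```python
    # Back-of-the-envelope water math, using the figures cited above
    # (all quantities in million acre feet, MAF).
    total_managed = 40.0   # ~40 MAF of freshwater managed per year statewide
    urban_use = 9.1        # urban use, roughly one-fifth of the total

    for cut in (0.20, 0.60):  # the state's 20% target vs. the Pacific Institute's upper estimate
        saved = urban_use * cut
        print(f"A {cut:.0%} cut in urban use frees up ~{saved:.1f} MAF "
              f"({saved / total_managed:.0%} of the managed total)")

    # NASA's estimated shortfall, for comparison
    print("Shortfall to make up: ~34 MAF")
    ```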

    According to NASA, though, the state has a shortfall of almost 34 million acre feet of water.

    How—after we finish our praying and dancing for rain—can we possibly come up with the difference? We will have to produce it ourselves.

    I don't mean that we are going to have to produce water de novo, like magic. This is all about the proper applications of science, through engineering, to convert unusable water into a fresh supply.

    There are two major areas of technology currently in development and begging for attention from innovators: desalination and water reclamation. Both involve the separation of freshwater from the things that make it unsuitable for human uses, such as salts, pollutants and human waste.

    Desalination is typically thought of as the refinement of saltwater by plants built in coastal areas, whereas reclamation deals with purifying water that was fouled by human or industrial use. The two main issues affecting their use and implementation are cost and environmental concerns.

    When it comes to desalination, the plants in existence use a well-established yet fairly old process called reverse osmosis, which forces water across a semi-permeable filtration membrane. The lion's share of the cost lies in the energy, usually supplied in the form of expensive electricity from the power grid, required to push the water across that membrane. Additionally, membranes get dirty, require cleaning with chemicals, and need replacing on a regular basis, which adds substantially to the cost of running the plants.

    Then there is the salty, chemical-filled brine byproduct of desalination to consider. Where does it go? Often it is pumped right back into the coastal ecosystem, where it can raise local salt concentrations far beyond the normal range, but there are other brine disposal options available depending on the specifics of a plant's location. Regulations are also in place to ensure that any proposed desalination plant properly assesses its effects on marine life and the environment.

    In California's more recent history, investment in desalination plants has faced an uphill battle: water has generally been cheap compared to what such facilities can provide, supplies periodically rebound, and there are legitimate concerns over environmental impacts.

    Now that water costs are skyrocketing in many parts of the state, and scientists are predicting a mega-drought that could last 50 years or more, improved desalination technology is once again a hot topic.

    All of these factors could influence politicians and investors to favor desalination in the short-term.

    According to Kristina Donnelly, a research associate with the Berkeley, CA-based Pacific Institute Water Program, desalination is only economically viable during times of extreme scarcity. Nor is desalination used much for agriculture, which consumes the bulk of California's water. She said, "Some small facilities are processing groundwater for agriculture, but the large plants are too expensive."

    Donnelly went on to explain that farmers pay $100-200 per acre foot whereas urban areas are charged upwards of $1000-2000 for the same amount of water. She emphasized that residential districts sometimes act out of a sense of fear and urgency during droughts when they would benefit more from focusing on conservation and metering. They sign long-term contracts with desalination providers that lock them into paying for the capital costs of water that they no longer need when it starts raining again. There are examples of desalination plants being built during droughts only to be shuttered a few years later.  

    The cost of desalination is currently around $2000 per acre foot, which is at the extreme high end of what people pay right now, but not completely unreasonable as water becomes more and more limited. Advancing technologies could lead to more efficient and affordable desalination processes, but at this point none are marketable or commercial. According to Ms. Donnelly, "Reverse osmosis is the industry standard, and best option." Until technologies advance beyond the demonstration stage, they are as good as snake oil.

    One company, WaterFX, aims to provide commercial scale desalination of agricultural wastewater, and relies on something other than reverse osmosis. Their solar thermal distillation technology is a new twist on an old technique that elegantly brings desalination and water reclamation together. Using the heat from the sun, WaterFX has been able to increase the efficiency of what is essentially a concentrated solar still by implementing multiple-effect distillation, or MED.

    In MED, water passes through multiple stages, each called an "effect", and is boiled in each one; every effect reuses the heat given up by the vapor from the previous stage, which is where the efficiency gain comes from. The trick, with what WaterFX calls their Aqua4 technology, is that they use light from the sun to drive the distillation:

    "The system is a concentrated solar still that uses large solar arrays to capture solar thermal energy from the sun. The sun heats mineral oil that then flows to the Multi-effect Distillation system (MED) that evaporates freshwater from the source water."

    The Aqua4™ technology from the WaterFX demonstration plant in California.

    Dr. Matthew Stuber, co-founder and director of process systems engineering for WaterFX, told me that irrigation in areas with poor soil drainage leads to the accumulation of salts in the soil—potentially bad for crop yields. The solution, for many farmers, is to install drainage pipes underground that divert the water into evaporation ponds. It works to keep the soils productive, but requires large areas of land to go fallow. These ponds are actually a big problem for farmers and water management departments because of the concentrated salt build-up that can occur.

    Based on results from their demonstration plant, Dr. Stuber says that they are able to isolate the salts through the distillation process into a concentrated form that can be removed from the environment and dealt with in other ways, including making gypsum. The process reclaims 93% of the drainage water that enters the system as freshwater, while simultaneously producing the brine "co-product".

    WaterFX just announced that it will be expanding its pilot demonstration plant near Firebaugh, CA in the Panoche Water District into a 35 acre installation with the potential to grow to 70 acres. They say that this HydroRevolution plant will "ultimately be able to generate up to 5,000 acre-feet of water per year, enough water for 10,000 homes or 2,000 acres of cropland."

    The company chairman, Aaron Mandell, estimates that with expected advances in their technology over the next five years, a two-square mile area of land will be sufficient to provide 100 million gallons of water a day at a price of just $450 per acre foot. That's one-quarter of the price of standard desalination.
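
    As a rough sanity check on those projections, here is a small conversion sketch. The only figure not taken from the article is the standard conversion of roughly 325,851 gallons per acre foot:

    ```python
    # Rough unit conversions for the projections above.
    GALLONS_PER_ACRE_FOOT = 325_851  # standard conversion factor

    target_gal_per_day = 100_000_000  # 100 million gallons a day from ~2 square miles
    acre_feet_per_year = target_gal_per_day * 365 / GALLONS_PER_ACRE_FOOT
    print(f"100 million gallons/day is roughly {acre_feet_per_year:,.0f} acre feet per year")

    projected_cost = 450   # $/acre foot, Mandell's five-year projection
    standard_desal = 2000  # $/acre foot, the current cost cited earlier
    print(f"${projected_cost}/acre foot is about {projected_cost / standard_desal:.0%} "
          f"of ${standard_desal}/acre foot, i.e. roughly one-quarter")
    ```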

    And, this is just the beginning for HydroRevolution. It's estimated that there is nearly one million acre feet of drainage water in California's Central Valley. And, according to Dr. Stuber, WaterFX is looking at other locations for implementation. He described the challenges that island states in places like the Caribbean face with drought, rising seas, and diminishing fresh groundwater. As the stores are reduced, saltwater encroaches, resulting in more brackish drinking water. Another potential customer: data centers, which require large amounts of clean water in order to cool processors, and would benefit from efficient recycling.

    Says Stuber, "It's all a new design problem… Because our energy source is dependent on the local climate, each location gets a different design and a slightly different solution.  And, that's something that we excel at is giving optimal solutions for the location, for the kind of water, and making sure that our emphasis is always on that renewable component. And, that the activities that we are doing are not contributing to the environmental problem that we are facing."

    WaterFX also claims to support an open-source philosophy. They are actively looking for people to contribute to the research in this area, to iterate upon it, and to push development for higher efficiency systems forward even faster. It's a business model that might be hugely successful when viewed through the lens of our environment.

    In the United States, a concerted effort is mounting to push innovation in all areas of water technology. Two organizations, the Water Environment Federation (WEF) and the Water Environment Research Foundation (WERF) have created a program called LIFT, the Leaders Innovation Forum for Technology, which works to get water tech ideas from conception to commercialization.

    Along those lines, California created its Innovation Hub (iHub) initiative to get more technology to market across the state. Additionally, the Alliance for Coastal Technologies and a coalition of 10 government agencies, non-profits, and universities launched a challenge to develop a low-cost, accurate, and easy-to-use water nutrient sensor: nitrate and phosphate pollution are a $2 billion problem for freshwater management.

    The business environment seems ripe for new ideas to increase efficiency and reduce water-related costs.

    Even if amazing technologies are developed, though, getting them adopted by water agencies is another problem entirely.

    According to a fascinating document published by the Partnership on Technology Innovation and the Environment, "There is a shortage of skilled employees in the wastewater sector and this is worsening as the current generation of utility experts retires. Most universities training wastewater facility managers are not teaching with systems thinking, and small- and medium-sized municipalities lack the resources to hire professionals with the expertise to look beyond the day-to-day repairs to strategic upgrades and how to finance them."

    I'd love to say that there's going to be a technological silver bullet for water. But water is complicated. WaterFX, along with developing technologies for sensors, forward osmosis, and nanotube membranes, could be an integral part of a much bigger water strategy, but none show a clear path out of the dry woods.

    It's going to take a lot of money. It's going to take cooperation on a scale the West has never seen.

  • Frank Drake thinks it's silly to send messages to ET

    Making contact with aliens: the subject of many a sci-fi story, and a variety of imagined outcomes. Though no one knows what will happen if we encounter intelligent extra-terrestrial life, scientists are divided on how we should proceed.

    SETI, the Search for Extra-Terrestrial Intelligence, has been searching for signals from said ETs for many years with no positive results. Of course, there have been interesting signals, but nothing specifically indicative of intelligence.

    Scientists from SETI are turning up the volume on a debate that has raged for several years: should we start actively transmitting messages into outer space, rather than continuing to passively scan the skies while leaking only weak radiation from our surface activities? At a press conference at the annual meeting of the American Association for the Advancement of Science in San Jose this week, Douglas Vakoch presented the question and argued that beginning to transmit in an active, directed fashion would be part of humanity "growing up". He also wondered whether it might be the approach that finally lets us "make contact" with aliens.
    (more…)

  • The day I met a creationist at the science conference


    The last place you expect to meet a creationist is at the annual American Geophysical Union conference. I don't know how I got so lucky.

    Yesterday morning, I wandered through the posters presented at the event, with a thought to translating their scientific jargon into something interesting to read. Since my background is biological, I thought that discipline would be the obvious place to start—in particular, something about microbes doing interesting things under the surface of the Earth.

    A title caught my eye. It was one of the first posters in the aisle, prominently placed for the casual passerby:

    A COMPARISON OF δ13C & pMC VALUES for TEN CRETACEOUS-JURASSIC DINOSAUR BONES from TEXAS to ALASKA USA, CHINA AND EUROPE WITH THAT OF COAL AND DIAMONDS PRESENTED IN THE 2003 AGU MEETING

    Dinosaur bones and diamonds! My brain, attracted to both old and shiny objects, sent me in closer to investigate. As I was trying to interpret the densely-packed board of letters, numbers, and figures printed in incredibly tiny print, I was approached by a slight, elderly man in glasses. A name badge declared him to be Hugh Miller, the first author on the poster.

    He asked if I had any questions. I asked if he could just give me a quick summary of the work. He talked about performing mass spectrometry on samples of various dinosaur bones that produced age estimates ranging from 15,000 to 50,000 years. My spidey-sense tingled. I peered over his shoulder, searching for bullet points to figure out what was going on here.

    That's when I read it: "humans, neanderthals, and dinosaurs existed together."

    The poster was challenging radiocarbon dating using Carbon-14 (C-14) isotopes. It suggested that the authors' data, comparing coal, diamond, wood, and dinosaur bones, were sufficient to throw all of geology into question. Namely, that the accepted age of the dinosaurs is off by some 2000x.

    Moreover, humanity must be increasingly concerned about asteroid strikes to the Earth, because that age estimate error would influence our estimate of the size of the whole universe (since we look at the size of the universe through the lens of time), which would mean that everything in our solar system is more densely packed. Hence, we are more likely to be hit by asteroids because they are so much closer to us than thought.

    This makes about as much sense as the Indiana Jones movie with ancient alien archaeologists.

    credit: Paramount Pictures

    I don't know if Hugh saw the quizzical look in my eyes, but when he was interrupted by someone asking for something, I quickly backed away.

    Now, here's the thing about Carbon-14 dating. This isotope has a very short half-life (the time it takes for half of the atoms in a sample to decay) of only 5730 years. Since it decays so quickly, it is useless for dating objects more than about 40,000-50,000 years old; beyond that point, the trace that remains is swamped by the background levels of C-14 in the laboratory, which always have to be compensated for.
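
    A quick decay calculation, using the half-life above, shows why the technique runs out of usable signal. This is just a worked example, not anything from the poster:

    ```python
    # Fraction of the original carbon-14 remaining after t years: 0.5 ** (t / half_life)
    HALF_LIFE_C14 = 5730  # years

    # One half-life, the practical dating limit, and a "dinosaur-age" sample.
    for age in (5730, 50_000, 65_000_000):
        remaining = 0.5 ** (age / HALF_LIFE_C14)
        # The last value underflows to zero, which is rather the point.
        print(f"After {age:,} years, about {remaining:.2e} of the original C-14 remains")
    ```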

    According to the NCSE website:

    "This radiation cannot be totally eliminated from the laboratory, so one could probably get a "radiocarbon" date of fifty thousand years from a pure carbon-free piece of tin."

    And, this is pretty much what the poster presented.

    When looking at fossils preserved in sedimentary rock, the fossil itself can be dated, but often a technique called "bracketing" is used, where the igneous rock on either side of the fossil is dated with radioactive isotopes that have half-lives on the order of millions of years. This gives scientists a range of time in which the animal could have lived. The poster authors, Hugh included, were basing their attack on one technique in the geological toolkit, and disregarding all other evidence that would have undermined their conclusions.

    How did this abstract get past the selection process? I have no idea, but I hope that people at the conference were able to see that it was not science. It was an example of belief masquerading as scientific inquiry.

  • Adventurers can help science by sharing their data

    "K2 8611" by Kogo – Own work. Licensed under GFDL via Wikimedia Commons

    By adding a little sampling to their adventures out in the wild, explorers in hard-to-reach locations could lend a big hand to scientific research.

    An organization called Adventurers and Scientists for Conservation hopes to bring the two professions together in the name of science.

    This week, I sat in on a session at the American Geophysical Union meeting in which the speakers discussed the merits of citizen science and the potential impact that explorers could make on scientific data collection.

    Many scientists are explorers and trek across the globe, but often they have responsibilities that keep them tied to the institutions where they work, with limited opportunities to get into the field for data collection. If sampling techniques can be simplified and standardized so that anyone can learn how to collect the necessary bits of rock, water, flora, etc. at particular sites, why not ask the people who are already out there to help out?

    Additionally, those out exploring are often on the front lines of witnessing changes to our planet, and are passionate about wanting to help in some way.

    Not all science can utilize the citizenry, but for those projects that can, this seems like an amazing resource on both sides of the equation.

  • Scientists track water locked hundreds of miles underground


    The Earth is full of water. Not just lakes, rivers, streams, and oceans on the crustal surface, or even aquifers close to the surface: the planet literally holds water inside itself.

    Deep inside the mantle, where the temperature and pressure are so high you would think it impossible, viscous crystalline rocks potentially trap the equivalent of the Pacific Ocean.

    Last year, scientists found a diamond with the tiniest speck of an olivine mineral called ringwoodite in it that was 1.5 percent water by weight. Ringwoodite only exists at great depths, some 550-660 km beneath the surface of the Earth, where phase transitions alter the structure of olivine into something that is more capable of holding water.

    Convection within the mantle could conceivably bring water held in olivine back to the surface. In the case of the ringwoodite-containing diamond, the process was rapid and explosive, but it is more likely to be slow and gradual. The inner-Earth's water cycle is thought to take on the order of 250-500 million years.

    There are chemical processes at work around undersea vents and volcanoes by which water gets incorporated into rock in the Earth's crust. The crust is constantly moving, with separate plates jockeying for position, rubbing up against one another, and sometimes getting subsumed underneath each other.

    When one crustal plate dives beneath another, that's called subduction. This process is thought to take rocks, and the water held in them, down into the mantle.

    At about 100-150 km down, the rocks start to break down under the pressure and increasing temperature. Water gets released during the breakdown process, but it's not entirely efficient. A lot of water remains tied up in the minerals as they break apart, and recombine through chemical reactions. They head ever deeper.

    We know that ringwoodite can hold water, but it has been determined that below 660 km, ringwoodite transitions into another mineral called bridgmanite, which can't hold much water. However, seismic mapping experiments have detected areas of melt (pockets of melted material, differing in chemical composition, held within the crystalline solids), possibly indicative of water, at depths of 760 km. This is 100 km deeper than water should be able to venture.

    So, how does the water get there? And, how is there still water deep down there if the cycle keeps taking the stuff back to the surface? What is the missing step?

    Wendy Panero, PhD

    Wendy Panero, PhD, an Assistant Professor at Ohio State University, has been addressing these questions with her graduate student, Jeff Pigott. Together, they created computer models of the lower mantle, and came up with an answer.

    Garnet. The burgundy-colored mineral is stable at depths beyond what ringwoodite can handle, and well into the lower mantle. It is possible that garnet could be a water-carrying missionary into the land of bridgmanite.

    If the mechanism is correct, it puts another link in the chain of Earth's inner water cycle, and when connected to the oceans and atmosphere, it puts the duration of a complete cycle on the order of billions of years. Additionally, it constrains the amount of water that could be contained within the Earth by specifying the minerals that are in the chain, and determining their respective contributions to the cycle. Whereas previous estimates have put the amount of water in the mantle at 1-3 times the amount of water on the surface, this study brings that quantity down to a single ocean. Regardless of the reduction, this is still substantial considering that all of the water could have originated from geochemical processes alone.

    The interior of our planet is something we can't touch, unless it spits itself out at us. Our technological abilities allow us to mimic it ever more precisely with each passing advancement. The lab Dr. Panero has created contains a piece of equipment called a diamond anvil cell, which squeezes minerals between two diamonds in order to apply immense amounts of pressure, and then fires a laser to bring up the heat to subterranean levels. Her lab just might have the burn marks to prove it. She also gets to take that diamond anvil cell to a synchrotron where she and a team of physicists fire high-energy x-rays at it.

    In addition to investigating our Earth's geochemical pathways, Dr. Panero is also a planetary scientist, and she suggested that plate tectonics might be the key to Earth's abundance of water. The mantle probably plays an influential role in the amount of water that is in our oceans, and consequently the amount of carbon it can store. However, Panero mentioned that this finding raises more questions than it answers, as we know very little about the content of these melts at great depth, the other things they could carry within them, and exactly how they move through the mantle's circulation.

    If we want to get a better idea of our planet's formation and modern state, rather than simply considering the Goldilocks zone as the place where water can be liquid, we need to look at our own and other planets with a broader eye toward what Panero called the "geochemical Goldilocks zone"… that place where all the chemical and physical processes on and in a planet can allow for the culmination of something like oceans and an atmosphere.

    Maybe we are even more lucky than we thought.

  • Mars burps methane. Scientists want to know why.
    Nasa's Curiosity rover found methane at about 1 part per billion in the atmosphere of Mars. That's 4,000 times less than in the air on our own planet. Gotta be all the farts. Photograph: Nasa/JPL-Caltech/MSSS/EPA

    NASA scientists reported results from the Mars Curiosity roving science lab at the American Geophysical Union to a packed room of press chomping at the bit for a big story. It turns out Mars has gas. It burps methane "sporadically, and episodically," according to Curiosity co-investigator, Sushil Atreya.

    (more…)

  • Humanity at night

    You might recall an image released in 2012 of the Earth's lights at night. Called Black Marble, the popular image was generated as a composite of the best images taken over several months by NASA's Suomi-NPP Visible Infrared Imaging Radiometer Suite (VIIRS). The instrument, according to NASA's website, "detects light in a range of wavelengths from green to near-infrared and uses 'smart' light sensors to observe dim signals such as city lights, auroras, wildfires, and reflected moonlight."


    Black Marble is a pretty image. It shows the light emitted by people, but it doesn't give us a sense of how or why we are using it. So, Miguel Roman and Eleanor Stokes set about creating a dynamic understanding of human behavior at night around the globe.

    They took over three years of Suomi-NPP VIIRS data covering major urban areas in North America, the Caribbean, and the Middle East, and developed algorithms to remove view-obstructing clouds, correct terrain errors, correct for atmospheric effects, and strip out light contamination from the moon, fires, and stray light. Overall, they focused on daily changes in lighting at the country, city, and neighborhood scales during the holiday season as compared to the rest of the year.

    The result is visually dramatic. During the holidays our activity patterns change, which in turn changes where demand for energy services is located. For instance, in big metropolitan areas, lighting increases in predominantly suburban, residential areas. This is likely because people are leaving work earlier and going home to turn on lights. Also, McMansions require more light to illuminate with festive holiday lights. Interestingly, lighting in urban centers increases only slightly compared to the suburbs, probably because urban areas are already more illuminated at night in general.

    Further, Roman suggests that the "EKG" of a city can be measured and analyzed by looking at the daily changes over time. Graphing those changes demonstrates that there is a definite cyclic change in light requirements during the holidays. And, while the Bay Area's signal is strong, there are many days that simply could not be measured as a result of cloud cover.

    Some cities can't be measured at all during certain parts of the holiday because of snow, which creates something of a blind spot for VIIRS because of reflectance. Across the U.S., the team compiled data from 70 cities, and found a spike in lighting during the holidays.

    In contrast to much of the U.S., nighttime temperatures in Puerto Rico are often high and promote festivities late into the night. It is also a cultural tradition there to keep the Xmas lights on through January 20th. These behaviors come through loud and clear in the light data.

    (more…)

  • Why is Arctic ice melting so fast?


    Right now, it's cold in the Arctic. Days are dark, and ice grows to cover the dark sea. Come summer, lengthening days and warming temperatures will reverse that process. This is the ebb and flow of the Arctic, a natural cycle.

    However, over the past several decades we have seen summers melt more and more of the ice that forms during the cold winter months. As a result, more and more dark seawater is exposed to the light of day.

    NASA researchers, using several instruments on three separate satellites, have been collecting data for 15 years to find out why the ice is melting, and to be able to predict trends in future ice formation and melting. They reported on this data at the 2014 American Geophysical Union annual meeting, saying that 15 years' worth is the absolute minimum amount of information needed for them to begin making long-term predictions. Climate trends, as opposed to weather trends, are averaged over 30 years, so they are about halfway there at this point in time.

    The project to observe the Arctic is part of NASA's Clouds and the Earth's Radiant Energy System (CERES) mission. The instruments measure the solar radiation the Earth reflects, the thermal infrared radiation it emits, and the total of all emitted and reflected radiation.

    The results so far indicate that the Arctic is absorbing energy from the sun five percent faster now during the summer months than it was when they first began monitoring in 2000. This is important because the rest of the Earth is still absorbing energy at pretty much the same rate.

    Put into energetic terms, this means each square meter of the Arctic Ocean is absorbing approximately 10 more watts of solar energy than it did at the start of the record. Interestingly, the increase is not uniform; it is regionally specific. For instance, the Beaufort Sea has been measured at 50 watts per square meter.

    All this extra energy has an impact on sea ice melt. The Beaufort Sea is one of the more dramatic ice melt examples. And, the rate of ice loss in September in the Arctic overall is 13 percent per decade. Let me spell that out… the Arctic is absorbing more energy, the air temperature is warming, and the September sea ice is shrinking by 13 percent every decade.
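
    For a sense of what a 13-percent-per-decade decline compounds to, here is a tiny arithmetic sketch. The rate is the one reported above; the time spans are arbitrary:

    ```python
    # Compounding a 13% per-decade decline in September sea ice extent.
    RATE_PER_DECADE = 0.13

    for decades in (1, 2, 3, 5):
        remaining = (1 - RATE_PER_DECADE) ** decades
        print(f"After {decades} decade(s), ~{remaining:.0%} of the starting "
              f"September extent remains")
    ```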

    So, why is this happening? It partially has to do with albedo, or reflectivity. Ice and snow reflect the sun's light and energy, while dark oceans absorb it. Less summer ice means that things are going to warm up faster, creating a feed-forward cycle that will potentially lead to even further warming and melting.

    Walt Meier discussed differences in the ice itself that contribute to this process. He said that young ice melts more easily than old ice due to surface features and salinity. This results in much more rapid melting each year, which exposes more old, thicker ice to the sun's rays. Each year more old ice is lost, only to be replaced during the winter with easily melted young ice. The Arctic has lost 1.4 million square kilometers of ice over the past 15 years.

    Young, thin ice makes the Arctic more vulnerable to further summer melting. Further, Jennifer Kay said that cloud cover is not related to the observed increase in absorbed radiation.


  • The issue with arsenic


    Arsenic. Hearing the word in America usually brings up black and white mental images of the film "Arsenic and Old Lace." Yet, it is not an old issue. People around the world are exposed to dangerous levels of arsenic in their water.

    Speaking today at the American Geophysical Union meeting, Lex van Geen discussed the issue of arsenic in well water on the Indian subcontinent, primarily in Bangladesh and Bihar, India. His concern is that even though people are aware of the problem, very little is being done to address it.

    People continue to drill new wells without determining their safety (safe levels are set at less than 10 micrograms of arsenic per liter of water). Van Geen's data, collected from 2012-13, show that 50% of people in the area assessed drink water containing arsenic at unsafe levels. However, 100% of people live near safe wells. Additionally, only about a third of people who become aware that their wells are contaminated switch, either by drilling new wells or by using their neighbors' wells.

    The difference between a safe well and an arsenic-contaminated well is depth. Sedimentation by ancient arsenic-rich waters along river deltas left layers of arsenic-containing soil near the surface of the Earth. To get past the arsenic to clean aquifers, one has only to drill deeper than about 100 meters. However, wells are expensive to drill, and the deeper the well, the more expensive it will be.

    So, in these areas where there is no infrastructure to deliver treated water to people, the problem boils down to one of inequality. Only the wealthy are able to afford a deep enough well. And, although the government has initiated subsidy programs to help with the digging of wells, research suggests that the wells end up clustered within a small subset of villages where the inhabitants are wealthy and support the political party in power.

    In response, he and a team of researchers have developed affordable field test kits that can be used by private individuals or organizations to test wells for their arsenic content. The test results can be localized using GPS and smartphones. One of his collaborators is using Formhub, a system for mobile data collection, to improve data collection itself, quality control, and dissemination of information to impacted areas and individuals.
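
    To illustrate the kind of record a field kit plus smartphone workflow might produce, here is a hypothetical sketch. The 10 microgram-per-liter guideline comes from the article, but the record format, field names, and classify_well helper are invented for illustration and are not Formhub's actual schema or API:

    ```python
    # Hypothetical well-test record: an arsenic reading plus a GPS fix from a smartphone.
    SAFE_LIMIT_UG_PER_L = 10  # the safety guideline cited above

    def classify_well(arsenic_ug_per_l: float) -> str:
        """Label a well as safe or unsafe against the 10 ug/L guideline."""
        return "safe" if arsenic_ug_per_l < SAFE_LIMIT_UG_PER_L else "unsafe"

    # Illustrative readings only; not real survey data.
    readings = [
        {"well_id": "W-001", "lat": 24.05, "lon": 90.98, "arsenic_ug_per_l": 4.2},
        {"well_id": "W-002", "lat": 24.06, "lon": 90.97, "arsenic_ug_per_l": 86.0},
    ]

    for r in readings:
        print(f"{r['well_id']} at ({r['lat']}, {r['lon']}): "
              f"{r['arsenic_ug_per_l']} ug/L -> {classify_well(r['arsenic_ug_per_l'])}")
    ```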

    It's already looking like technology will speed up the spread of awareness about arsenic levels in wells and the availability of tests. Van Geen showed a couple of slides supporting this point with data collected in the past week, visually demonstrating that many more people are beginning to take advantage of the testing compared to the 2012-13 test period.

    This project, while important in the developing world where many millions more people are affected, could also be useful within the United States and Canada. The USGS has collected data on arsenic in water, and based on that information it is estimated that more than 40 million people in the U.S. are drinking arsenic-laden water, many at levels well above 10 micrograms/liter.

    The test kits do contain strips laden with mercury bromide, so there are concerns about their use. No one wants a baby getting one of the little strips in their mouth. But, there is no reason to think that an affordable, at home solution to testing for arsenic shouldn't be implemented if safety concerns are properly addressed. The risk from ingesting arsenic is much more serious and pressing.

    So, do you know how safe your well is? You should, and you can.

  • Watching the climate-cholera connection from orbit

    Researchers reported at the American Geophysical Union annual meeting in San Francisco, CA on a satellite-based cholera outbreak prediction model in which specific environmental factors are correlated with epidemic cholera outbreaks. That's right, scientists are predicting disease from space.

    Cholera is a devastating diarrheal disease indicative of inadequate sanitation and health infrastructure. It is caused by a bacterium called Vibrio cholerae that exists naturally in the environment. That means we can't get rid of it. We are just unlucky that it affects us the way that it does, so we have to learn how to deal with it, which basically involves cleaning up water supply and infrastructure.

    Cholera infects some 3-5 million people a year globally, but is really only a concern in developing countries where water sanitation is not sufficient to remove or kill the bacteria. The dehydrating effects of cholera lead to death in 20% of those infected, and are the result of toxins released by the organism once it begins to colonize the small intestine. The disease doesn't really spread person-to-person, but outbreaks result from a complex combination of hydrological, climate, and societal factors.

    "Cholera" by Unknown – Le Petit Journal, Bibliothèque nationale de France. Licensed under Public Domain via Wikimedia Commons.

    The organism can cause disease on an endemic and epidemic basis. In some regions of the world, usually coastal areas, it exists chronically in the population and environment, with disease outbreaks occurring on a fairly regular basis. For example, in Bangladesh the disease resurges twice a year: once in the spring, as a result of cholera-containing coastal waters intruding into the river system during the dry season when flows are low, and again in the fall after the monsoons, when flooding contaminates water resources.

    Epidemics, however, are much less predictable, occurring in a sudden, sporadic manner, and very often inland. Above-average temperatures and above-average precipitation, combined with poor sanitation, form the necessary pattern for epidemic outbreaks to get a start. In addition to the environmental parameters, any societal condition that devastates the water and community structure puts a population at risk, but these factors are currently much more difficult to monitor.

    The Indian sub-continent is where cholera is thought to have originated. But, it has been re-emerging in Africa over the past 10-15 years, especially in coastal Africa. So, researchers are using over 40 years of data to get an idea of its potential development in various African regions.

    (more…)

  • Air pollution: the holy Hajj's health threat
    Dr. Azhar Siddique

    Reporting this week at the annual American Geophysical Union, scientists from UC Irvine discussed air quality results from samples taken during the 2012 and 2013 Hajj.

    The annual pilgrimage brings 3-4 million people to the holy city of Mecca. Isobel Simpson, the lead researcher on this project, stated that, "The problem is that this intensifies the pollution that already exists. We measured among the highest concentrations [of smog-forming pollutants] our group has ever measured in urban areas – and we've studied 75 cities around the world in the past two decades."

    The worst locations were tunnels leading into the Grand Mosque, where carbon monoxide levels were up to 300 times higher than baseline measurements and pedestrians often walked in large numbers alongside idling motor vehicles. Increases in carbon monoxide are linked to increased numbers of hospitalizations and deaths from heart failure. In addition to carbon monoxide, the team found elevated levels of benzene, black carbon, and other fine particulates that can affect lung function.

    The main culprit here that can be addressed by the Saudi government is a lack of regulation over automobiles, gasoline, and exhaust. Currently, there is a significant lack of public transportation in the area, and nearly everyone owns a car. Those cars don't have the pollution-limiting devices that are currently required and built into vehicles in the United States.

    The easiest thing to fix would be separating pedestrians from cars in the tunnels, or at least spreading vehicles out more evenly among the tunnels leading to the Mosque, so that pollutants don't build up as much and negatively affect those walking in. Alternatively, improving ventilation in the tunnels would reduce deadly carbon monoxide build-up.

    In the meantime, if you are planning to join the Hajj in the near-future, consider bringing an air-filtering face mask.

  • Black hole power in a lightning bolt

    Have you ever been on a plane during a thunderstorm that experienced a direct lightning strike? While most commercial airliners will do their best to avoid thunderclouds delivering the wrath of the atmosphere, it's estimated that every plane in the U.S. is struck more than once per year.

    Large commercial planes are equipped to route the electrical current from a lightning strike so that it avoids sensitive electronics, and most passengers may not even realize that a plane has been struck when it does occur. However, the electrical current and loud clap of thunder are not all that is produced by a bolt of lightning. It's only within the past 20 years that research has confirmed that lightning also emits x-rays and gamma-rays.

    One source of x-rays is normal lightning, under normal atmospheric pressure, that occurs near the ground. These x-rays are measured at strengths analogous to the energy range commonly emitted by CT scanning devices used in health care. Then there are gamma-rays, even higher-energy radiation usually seen coming from particle accelerators, exploding stars, and black holes, which have been detected as a continuous kind of glow within clouds. Additionally, a separate class of gamma-rays, called terrestrial gamma-ray flashes or TGFs, are even more powerful, brief bursts that can be seen by spacecraft and satellites in low Earth orbit. TGFs are the most energetic phenomena on the planet, and are thought to be caused by intracloud lightning (lightning that occurs within a cloud). TGFs appear all over the world where there are thunderstorms, but nobody understands exactly why or how.

    Scientists at the University of Alabama in Huntsville reported in a press conference today at the 2014 American Geophysical Union meeting that they have been delving into the world of the high-energy particles associated with these bursts of light, and have concluded that TGFs can be produced by any type of storm, from the "garden variety" to more extreme events. That weaker or moderate-strength storms would produce TGFs was totally unexpected based on earlier theories.

    While lightning strikes some 50 times per second around the globe, TGFs fire up to 1100 times per day based on data from NASA's Fermi Gamma-ray Burst Monitor (GBM). Separation of positive and negative charges between layers in the clouds leads to lightning, and sometimes when lightning does fire, a surge of electrons is deflected upward. Those energetic electrons cause gamma-ray flashes when they bump into other particles in the atmosphere. The TGFs measured by craft in low Earth orbit like Fermi's GBM form 7-9 miles up, but TGFs are likely to form at lower altitudes as well. Because of attenuation in the atmosphere, those at lower altitudes are harder to detect from space, so the total number of TGFs is probably vastly underestimated.

    In order to address this issue, researchers attempted a gutsy experiment in which they actually flew an Airbus plane into a thunderstorm. The plane was equipped with a system called ILDAS to measure various properties of lightning strikes. The plane entered the thunderstorm to measure the radiation coming from the storm at approximately 10,000 feet, and over five hours experienced extreme turbulence and more than 20 direct lightning strikes. The data from the flight corroborates previous ground-based measurements linking gamma-rays and x-rays to lightning strikes. As long as there are enough brave pilots, this type of in-storm research will be one way that the scientists can get a better understanding of lower altitude TGFs.

    There are many questions remaining about how and why these high energy particles are produced by storms, and how they influence other atmospheric phenomena. What we do know now is that x-rays and gamma-rays that are detected can be tracked back to their source, the lightning, to help us learn more about the dangerous, dramatic, and mesmerizing flashes of light.

  • How to control mice with your mind

    Photo courtesy of Shutterstock

    Science fiction author Iain M. Banks wrote of a society in his Culture series in which individuals were physiologically enhanced or altered by implanting a consciously-controlled drug gland to deliver hormones or specific chemicals. Just by thinking about it, these glands provided people with "Calm" or enhanced "Focal" ability among other possibilities without side-effects. But, this type of medical technology has remained predominantly in the sci-fi realm until recently.

    Combining advancements in synthetic biology, optogenetics, and brain-computer-interfaces, researchers published a paper in Nature Communications this week that suggests "mind-genetic interfaces" could soon become available as treatment options.

    What does that mean? The scientists' system could one day allow people to use their own thought processes to control the expression of genes within their own bodies. When genes get expressed, they can be translated into proteins that serve certain cellular functions. So, if you have chronic pain, for instance, you could get your body to produce its intrinsic pain-relieving compounds just by thinking about it.

    The system described by the researchers is still in its infancy – so far, they have only used human brain activity to test their implantable gene-delivery device in mice, not in people – but the process is fairly straightforward.


    In the experiment, a coin-sized device was implanted under the skin of mice. The basic device consisted of three components: 1) a receiver-coil capable of picking up magnetic signals and converting them into an electrical current, 2) a near infrared LED powered by that current, and 3) a chamber for housing genetically-modified cells that respond selectively to the LED.

    The mice were placed onto a magnetic field generator that received brain activity (EEG) signals via Bluetooth from people wearing NeuroSky's MindSet EEG headsets. The people responded to biofeedback, meditated, or concentrated on a game to change the level of their brain activity. The change in mental state altered the magnetic field being generated, and the amount of current created by the implanted device.

    When the current crossed a set threshold (reached, for example, during meditation), the LED turned on and illuminated the cells in the chamber of the device. In the same way that you can see a bright flashlight through your hand, the researchers knew the light was turning on or off because they could actually see the illumination through the mouse's skin.
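
    Conceptually, the control logic is a simple threshold check. The sketch below is a hypothetical illustration of that loop, not the researchers' actual code; read_eeg_level() and set_field() are stand-ins for the headset and field-generator interfaces described above, and the threshold value is made up:

    ```python
    import random
    import time

    # Hypothetical stand-ins for the hardware described in the article: an EEG headset
    # streaming a scalar "mental state" level, and a magnetic field generator that
    # powers the implant's coil (and thus its LED) when switched on.

    MEDITATION_THRESHOLD = 0.7  # illustrative value, not from the paper

    def read_eeg_level() -> float:
        """Pretend EEG reading in [0, 1]; a real system would read the headset over Bluetooth."""
        return random.random()

    def set_field(on: bool) -> None:
        """Pretend field-generator control; turning the field on induces current in the implant."""
        print("field ON -> LED illuminates cells" if on else "field OFF")

    for _ in range(5):  # a few iterations of the control loop
        level = read_eeg_level()
        set_field(level >= MEDITATION_THRESHOLD)
        time.sleep(0.1)
    ```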

    The cells were specifically modified so that whenever the LED turned on they expressed a human protein called SEAP, which would then diffuse across a membrane and into the mouse's bloodstream. The researchers were then able to determine that SEAP levels in the blood of the mice changed in accord with changes in the mental state of the people involved.

    It's important to note that the cells, while modified from normal cells, are contained and isolated within the culture chamber of the device. They never actually come into contact with other cells in the body. To date, optogenetic methods have relied on injecting genetically-modified cells into body tissues or (in the case of mice) genetically modifying whole organisms. Neither of these methods is suitable for real-world therapeutic purposes. It will be interesting to see how long the cells in these devices remain viable once implanted, and how easily this particular system can be put into practice for human needs.


    When it comes down to it, this is still mostly gimmick. Whoo-hoo! We used EEGs to control another device… However, if someone can figure out how to couple this control system with the genes that influence and control disease and disorders in people, we might yet find ourselves inside one of Iain Banks' novels. It really gives new meaning to the idea that "it's all in your head".
    (more…)

  • Which is better: coffee or an electric shock to the head?

    I just poured my third cup of coffee today. It's mid-afternoon, so I do this knowing that it means I will lie in bed tonight staring at the ceiling entirely unable to sleep after I have fully annoyed my spouse with my inability to cease talking or stop moving. But, my energy is flagging, and I have words to write right now.

    Even though coffee and (more specifically) caffeine are known to produce unpleasant side-effects, the majority of the US citizenry is just like me. At least 80% of us consume caffeine daily. According to a study released in March of this year by the National Coffee Association of the USA, 61% of Americans drink coffee daily, and according to a review prepared for the FDA in 2012 based on data for the years 2003-2008, we consume about 300mg of caffeine per day on average.

    What does caffeine do for us? Why am I reaching for another cuppa?

    (more…)