The Challenge of the Planets, Part Two: High Energy



JPL’s nuclear-electric “Space Cruiser” could reach Pluto from Earth orbit in slightly more than three years. NASA/Jet Propulsion Laboratory



President John F. Kennedy did not call only for a piloted lunar landing by 1970 in his 25 May 1961 “Urgent National Needs” speech before a joint session of the U.S. Congress. Among other things, he sought new money to expand Federal research into nuclear rockets, which, he explained, might one day enable Americans to reach to “the very ends of the solar system.”


Today we know that Americans can reach the “ends” of the Solar System without resort to nuclear propulsion (though a radioisotope system is handy for generating electricity in the dark beyond Jupiter, where solar arrays become impractical). When President Kennedy gave his speech, however, it was widely assumed that “high-energy” propulsion – which for most researchers meant nuclear rockets – would be desirable for round-trip journeys to Mars and Venus and a necessity for voyages beyond those next-door worlds.


In his speech, President Kennedy referred specifically to the joint NASA-Atomic Energy Commission (AEC) ROVER nuclear-thermal rocket program. As the term implies, a nuclear-thermal rocket employs a nuclear reactor to heat a propellant (typically liquid hydrogen) and expel it through a nozzle to generate thrust.

ROVER had begun under U.S. Air Force (USAF)/AEC auspices in 1955. USAF/AEC selected the Kiwi reactor design for nuclear-thermal rocket ground testing in 1957 – a major step forward for the U.S. nuclear rocket program – and the USAF relinquished its role in the program to NASA in 1958. As President Kennedy gave his speech, U.S. aerospace companies competed for the contract to build NERVA, the first flight-capable nuclear-thermal rocket engine.


Nuclear-thermal propulsion was not the only form of nuclear-powered high-energy propulsion. Another was nuclear-electric propulsion, which can take many forms. This post examines only the form known widely as ion drive.


An ion thruster electrically charges a propellant and uses an electric or magnetic field to expel it at velocities far greater than any chemical rocket exhaust. Because charging the propellant and generating those fields require a great deal of electricity, only a small amount of propellant can be ionized and expelled at any moment. This means that an ion thruster permits only very gradual acceleration despite the speed at which propellant leaves it; in theory, however, an ion thruster can operate for months or years, enabling it to push a spacecraft to high velocities.
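The arithmetic behind that last point is worth seeing. Below is a back-of-the-envelope sketch in Python; the thrust, spacecraft mass, and operating time are assumed, round illustrative numbers – not figures from the JPL study – and propellant mass loss is ignored.

```python
# Back-of-the-envelope illustration of low-thrust propulsion. All numbers
# below are assumed for illustration only (not from the JPL study), and the
# spacecraft mass is treated as constant (propellant loss is ignored).

THRUST_N = 10.0          # assumed steady ion-engine thrust, newtons
MASS_KG = 20_000.0       # assumed spacecraft mass, kilograms
DAYS_THRUSTING = 365     # assumed continuous operating time, days

accel = THRUST_N / MASS_KG            # m/s^2 -- a few hundred-thousandths of a g
seconds = DAYS_THRUSTING * 24 * 3600
delta_v = accel * seconds             # velocity gained, m/s

print(f"acceleration: {accel:.1e} m/s^2 (~{accel / 9.81:.1e} g)")
print(f"velocity gained after {DAYS_THRUSTING} days: {delta_v / 1000:.1f} km/s")
```

The point is simply that a tiny acceleration, sustained long enough, accumulates into a velocity change measured in tens of kilometers per second.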


American rocket pioneer Robert Goddard first wrote of electric propulsion in his notebooks in 1906. By 1916 he had begun experiments with “electrified jets.” Interest faded in the 1920s and resumed in the 1940s. The list of ion drive experimenters and theorists reads like a “Who’s Who” of early space research: L. Shepherd and A. V. Cleaver in Britain, L. Spitzer and H. Tsien in the United States, and E. Sanger in West Germany all contributed to the development of ion propulsion before 1955.


In 1954, Ernst Stuhlinger, a member of the team of German rocketeers the U.S. Army brought to the United States at the end of the Second World War, began small-scale research into ion-drive spacecraft designs while working to develop missiles for the Army Ballistic Missile Agency (ABMA) at Redstone Arsenal in Huntsville, Alabama. His first design relied on solar concentrators for electricity, but he soon switched to nuclear-electric designs. In these, a reactor heated a working fluid which drove an electricity-generating turbine. The fluid then circulated through a radiator to shed waste heat before returning to the reactor to repeat the cycle.


Stuhlinger became a NASA employee in 1960 with the creation of the Marshall Space Flight Center (MSFC) out of ABMA. In March 1962, barely 10 months after Kennedy’s speech, the American Rocket Society hosted its second Electric Propulsion Conference in Berkeley, California. Stuhlinger was conference chairman. About 500 engineers heard 74 technical papers on a wide range of electric-propulsion-related topics, making it perhaps the largest professional gathering ever devoted solely to electric propulsion.


Among the papers was one reporting results of ion propulsion studies performed at the Jet Propulsion Laboratory (JPL) in Pasadena, California. JPL formed its electric-propulsion group in 1959 and commenced in-depth studies the following year.


One JPL study compared different forms of “high-energy” propulsion to determine which, if any, could perform 15 robotic space missions of interest to scientists. The missions were: Venus, Mars, Mercury, Jupiter, Saturn, and Pluto flybys; Venus, Mars, Mercury, Jupiter, and Saturn orbiters; a solar probe in orbit at about 10% of the Earth-Sun distance of 93 million miles; and “extra-ecliptic” missions to orbits tilted 15°, 30°, and 45° with respect to the plane of the ecliptic. In keeping with their robotic payloads, all were “one-way” missions.


The six-person JPL team calculated that a three-stage, seven-million-pound chemical-propellant Nova rocket capable of placing 300,000 pounds of hardware – mainly a massive chemical-propellant Earth-orbit departure stage – into 300-mile-high Earth orbit could, with a meaningful scientific instrument payload, achieve just eight of the 15 missions: the Venus, Mars, Mercury, Jupiter, and Saturn flybys; the Venus and Mars orbiters; and the 15° extra-ecliptic mission.


A chemical/nuclear-thermal hybrid rocket/spacecraft comprising a Saturn S-I first stage, a 79,000-pound Kiwi-derived nuclear-thermal second stage, and a 79,000-pound Kiwi-derived nuclear-thermal stage/interplanetary payload could carry out the Nova missions plus the 30° extra-ecliptic mission. By contrast, a 1500-kilowatt, 45,000-pound ion system starting from a 300-mile-high Earth orbit could achieve all 15 missions.


In several instances involving more distant interplanetary targets – for example, the Saturn flyby – the slow-accelerating ion system could reach its target hundreds of days ahead of the Nova and chemical/nuclear-thermal hybrid systems. It could also provide its instrument payloads and long-range telecommunications system with ample electrical power, boosting data return. A smaller, 600-kilowatt, 20,000-pound system that could be launched atop the planned Saturn C-1 rocket could accomplish all but the 45° extra-ecliptic mission.


Missiles and Rockets magazine devoted a two-page article to the JPL study. It headlined its report “Electric Tops for High-Energy Trips,” which must have been gratifying for long-time ion-drive supporters.


In 1962, NASA Headquarters opted to concentrate electric propulsion research at the Lewis Research Center in Cleveland, Ohio. Research did not stop entirely at NASA MSFC and JPL, however. Stuhlinger, for example, continued to produce designs for piloted ion-drive spacecraft as late as 1966.


Ironically, while the electric-propulsion engineers met near San Francisco, a young mathematician near Los Angeles was, with the aid of a large JPL computer, busy eliminating any immediate need for ion drive or any other kind of high-energy propulsion system in planetary exploration. The third part of this three-part series of posts will examine his work and its profound impact on planetary exploration.


References


“Electric Tops for High Energy Trips,” Missiles and Rockets, 2 April 1962, pp. 34-35.


“Electric Spacecraft – Progress 1962,” D. Langmuir, Astronautics, June 1962, pp. 20-25.


“The Development of Nuclear Rocket Propulsion in the United States,” W. House, Journal of the British Interplanetary Society, March-April 1964, pp. 306-318.


Ion Propulsion for Space Flight, E. Stuhlinger, McGraw-Hill Book Company, New York, 1964, pp. 1-11.


Nuclear Electric Spacecraft for Unmanned Planetary and Interplanetary Missions, JPL Technical Report No. 32-281, D. Spencer, L. Jaffe, J. Lucas, O. Merrill, and J. Shafer, Jet Propulsion Laboratory, 25 April 1962.


The Electric Space Cruiser for High-Energy Missions, JPL Technical Report No. 32-404, R. Beale, E. Speiser, and J. Womack, Jet Propulsion Laboratory, 8 June 1963.


Related Beyond Apollo Posts


http://ift.tt/17S1zbA


http://ift.tt/1xzh2C8


http://ift.tt/17S1C7f


http://ift.tt/1xzh2Cd



Sprint’s Net Neutrality Reversal Shows How Bad Things Are for ISPs


File picture of people walking past a Sprint store in New York

© ANDREW KELLY/Reuters/Corbis



When Federal Communications Commission Chairman Tom Wheeler hinted that he might support Title II—meaning his agency could end up treating internet service providers much like “common carrier” telephone companies—we knew that Google and Netflix were pleased. Title II would uphold “net neutrality,” the notion that ISPs must treat all content equally, and that’s what the Googles and the Netflixes want.


By the same token, most internet watchers assumed that the big internet service providers would hate the idea. After all, AT&T CEO Randall Stephenson said that he’s worried enough about the specter of Title II to pause a 100-city fiber build-out. If ISPs are common carriers, he argued, then they have little reason to expand their operations. But as it turns out, the situation is more complicated than it might seem.


On Thursday, Sprint said that, unlike AT&T, it’s actually OK with the Title II idea. In an FCC filing, Sprint’s CTO, Stephen Bye, wrote: “Sprint does not believe that a light touch application of Title II, including appropriate forbearance, would harm the continued investment in, and deployment of, mobile broadband services.”


Sprint is a wireless carrier, so it may not share exactly the same concerns as AT&T or Verizon, both of which also have capital-intensive landline businesses. Though wireless voice is already covered under Title II, the data side of things is not. But like AT&T, Verizon has said that Title II would be bad for its mobile broadband business, and T-Mobile seems to feel that way too.


So why has Sprint taken the maverick position on Title II? For one thing, it’s an underdog, trailing far behind AT&T and Verizon in the mobile arena. So maybe it’s willing to take a chance, feeling that it doesn’t have quite as much to lose as the bigger companies.


That dovetails with one of the worrying trends we’re seeing in the ISP marketplace here in the U.S. Large carriers are starting to charge companies such as Netflix and Google in order to offer speedy content delivery for their movies and TV shows. Net neutrality advocates say this breaks the way the internet works, and they’re asking the FCC to stop it.


But from a purely economic perspective, the big guys can do this because they have so many customers. If Netflix doesn’t work properly on Sprint, that’s a problem, but not a crisis. If it’s choppy on Verizon’s much larger network, though, that can hurt the company’s bottom line.


Because of wireless data caps, the vast majority of movie watching happens on fixed-line networks, but that could eventually change. And if it does, Sprint may find that a Title II-enforced level playing field makes it more competitive. Here’s how Dane Jasper, CEO of an even smaller internet service provider, California’s Sonic.net, explains things.


“It’s just a guess, but perhaps they are in the same position we are: they’re in a distant third place, and do not have enough customers to really monetize ‘captive eyeballs’ by extorting content sources. By assuring that neither AT&T nor Verizon do either, they can help maintain a level playing field,” Jasper says.


As Jasper explains it, Comcast is now getting a certain amount of money per customer from Netflix, and Sonic is not. “That allows them to undercut us (in theory, and to some small degree at this point),” he says. “Continued exploitation of a double-ended market could result in the leader being able to undercut all followers, simply because the smaller carriers don’t have enough customers to lever payments out of sources of content.”



With a Hyperloop Test Track, Elon Musk Takes on the Critical Heavy Lifting


Hyperloop concept rendering for Philadelphia. © 2014 omegabyte3d

HTT/JumpStartFund



The Hyperloop is coming to Texas. That’s the word from Elon Musk, who unveiled his idea for the revolutionary transit system 18 months ago, and yesterday tweeted his plans to build a test track for companies and student teams working to make the idea a reality.


In August 2013, the Tesla Motors and SpaceX CEO gave the world a 57-page alpha white paper explaining his vision for the transit system, which would shoot pods full of people around the country through above-ground tubes at 800 mph.


Musk stuck to his standard announcement system—drop big news, keep quiet on details—so we don’t know much about what he’s got in mind, when it would happen, how much it would cost, who would pay for it, or why, exactly, he wants to do it. (On that last one, we suspect the answer is because this thing is a damn awesome idea and he doesn’t want to miss out.)


If Musk does in fact build a test track, in Texas or elsewhere, it would be a huge help to the company that’s made more progress than anyone toward making the Hyperloop happen. The track isn’t the part of this endeavor that’s hard to engineer. “It’s a couple of tubes and a vacuum pump,” says Dirk Ahlborn, CEO of JumpStartFund, an El Segundo, California-based startup that is taking Musk up on his challenge to develop and build the Hyperloop. But, like most chunks of infrastructure, even in prototype sizes, it’s expensive.


If Musk pays for it—hey, the guy’s worth $7.5 billion—it’s a major item JumpStartFund can stop worrying about. “We’ll be able to act faster because that big problem is solved,” Ahlborn says. He hasn’t done the math on how much a test track would cost or how long it would take to build, but imagines it would be a simple affair, since it’s just for testing purposes. It would have one tube instead of the two planned for the commercial version (one for each direction), and would be kept low to the ground.



HTT/JumpStartFund



JumpStartFund brought together a group of about 100 engineers from all over the country who spend their free time spitballing ideas in exchange for stock options; they have day jobs at organizations like Boeing, NASA, Yahoo!, and Airbus. They and a group of 25 students in UCLA’s graduate architecture program are working on a wide array of issues, including route planning, capsule design, and cost analysis.


“It’s hugely feasible” to build a working Hyperloop, says Professor Craig Hodgetts, who’s leading the UCLA team. Besides land acquisition and political hurdles, the big challenges are creating a capsule system that feels comfortable and safe for passengers and designing a station that can accommodate a continuous stream of pods coming and going—the Hyperloop will work more like a ski lift than a railroad.


All that will take time to figure out, Hodgetts says, and if Musk gets busy building track while the JumpStartFund and UCLA folks put together everything else, it could drastically cut down the time the project would take. “It’s just like having a bunch of supercomputers side by side.”



Why Elon Musk Doesn’t Mind That His Rocket Crashed Into His Robot Boat


Six days ago, SpaceX founder and CEO Elon Musk was atwitter with excitement over the next step in his plan to get humans off the planet. After successfully sending four of its Dragon capsules to resupply the ISS, SpaceX was launching yet another—but this time, with an added level of difficulty. The launcher that gets the capsules into orbit, the Falcon 9, would attempt to land its boost stage—that’s the part with the rocket—back on Earth after the delivery. Well, not exactly on Earth. On a drone spaceport barge. In the middle of the Atlantic Ocean.


On the scale of targets that you’re trying to hit from space, this thing is tiny. Usually, space explorers feel lucky when they can target anything roughly the size of a vast body of water when they reenter the atmosphere; indeed, all of SpaceX’s Dragon capsules are designed to splash down, aided by an impact-mitigating parachute. The Falcon booster has completed two soft landings in the ocean. But this time, it was aiming for a barge just 300 by 170 feet. Musk put the Falcon’s chances for a successful landing at 50/50, but a day before the launch, he took it back in a reddit AMA: “I pretty much made that up. I have no idea :).”


All things considered, for the first try, the Falcon could’ve done worse:


Technically, Falcon did hit its target—just at the wrong angle, and a bit off-center. It’s dark, and a little hard to see, but Musk explained exactly what went wrong in a series of tweets today. The grid fins that he had described as crucial to the landing process “lose power and go hardover.” On their own, the rocket’s nitrogen thrusters aren’t powerful enough to deal with the aerodynamic forces at play here. So once the fins run out of hydraulic fluid during the booster’s descent, all bets are off. “Engines fights [sic] to restore, but…Rocket hits hard at ~45 deg angle, smashing legs and engine section.” Then, the leftover fuel and oxygen combine in a big ol’ explosion. The technical summary: “Full RUD (rapid unscheduled disassembly) event,” Musk quipped.


Musk seems pretty nonchalant about losing an asset like this—the booster looks pretty torn up, though the ship suffered only minor damage. (That’s one tough ship.) But Musk’s sanguinity makes sense in the context of his larger plans. At an MIT symposium in October, Musk broke down how important reusing the Falcon will be to getting humans off of Earth. “Reusability is the critical breakthrough needed in rocketry to take things to the next level,” he said. If you can reuse a rocket, each ship from Earth only costs you the fuel it takes to leave—not the $60 million or so it takes to build a Falcon. Landing on a floating platform is the first step to an even greater efficiency: Putting it right back at the launch site from whence it came. “But before we boost back to the launch site and try to land there,” Musk said, “we need to show that we can land with precision over and over again, otherwise something bad could happen.”


Which, you know, turns out to be true.


Luckily, we won’t have to wait long to see another test of the Falcon’s precision landing system: “Next rocket landing on drone ship in 2 to 3 weeks w way more hydraulic fluid,” Musk tweeted. “At least it shd explode for a diff reason.”



Tech Time Warp of the Week: Before WWII, ‘Computers’ Were Rooms Full of Humans


Nowadays, just about everyone is trying to build computers that can think and reason like humans. Or so it seems. Google and Facebook and Baidu and IBM are doing it. So are startups like MetaMind and Viv. And new players pop up all the time. But as recently as World War II, the idea of a non-human computer was completely foreign to most people.


Confused? Well, the word “computer” once referred to a person whose job involved doing calculations by hand. As explained in the ComputerHistory mini-documentary above, back in the days before digital computers, mathematicians would break complex problems into smaller parts and farm the work out to individual human computers to solve. Used as far back as the mid-1700s for astronomical calculations, this process allowed mathematicians to quickly solve problems they couldn’t tackle before.


Early computers were mostly women, the documentary says, because they would work for less money than men. It was the beginning of a long and unfortunate trend. The place of women in the world of computing remains undervalued today—though that’s beginning to change.


Eventually, many of these female “computers” went on to work on the ENIAC, one of the first general-purpose electronic computers, and the good news is that they’re now recognized as some of the pioneers of programming.


Thanks, in part, to them, computation is now automated. But the idea of breaking large computational problems into smaller parts hasn’t gone away. That’s how modern supercomputing services work, from Hadoop on down.
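As a concrete (if anachronistic) illustration, here is a minimal Python sketch of that same divide-and-aggregate pattern, with a process pool standing in for the rows of human computers. It is not Hadoop or any particular supercomputing service, just the shape of the idea.

```python
# Minimal sketch of the divide-and-aggregate pattern the human computing
# rooms used, in modern form: split a big job into independent pieces,
# compute each piece separately (a process pool stands in for the rows of
# human computers), then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # One "computer" handles one slice of the overall problem.
    return sum(x * x for x in chunk)

def chunked(seq, n):
    # Cut the problem into fixed-size pieces to farm out.
    return [seq[i:i + n] for i in range(0, len(seq), n)]

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    pieces = chunked(numbers, 100_000)
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(partial_sum, pieces))
    print(total)
```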



Ready for What’s Next? Envision a Future Where Your Personal Information Is Digital Currency



401(K) 2013/Flickr



Have you heard the credit card commercial that asks, “What’s in your wallet?” Before long, the answer is going to be “you.”


As our digital footprint grows, I see personal information becoming a form of digital currency – something that an individual person owns, controls and uses in exchange for “personalized” goods and services.


An estimated 90 percent of American adults have a cell phone; 87 percent use the internet and about 74 percent of those users participate in social media, according to the Pew Research Internet Project.


Across the globe today, companies make money by selling or sharing the digital data left behind by these users. I see a future where the model is flipped – a world where consumers are in control of their own data and approve its release and use for personal benefits or for social good.


What data are we talking about? Take a minute to think about your online accounts and activities. You will be surprised at the length of the list: email; social media; web history; services such as television cable, electric power, cell phones; online banking; credit cards; apps such as game sites and coupons; even location data from your cell phone and GPS device. In addition to all that, a lot of information is preserved on private data servers, such as electronic health records.


Today, much of this digital information is not under your control. For example, you “trade” information with an email provider to create an email account. But it’s hard to know how that service provider will, in turn, use your information. Once you provide the data points, they are out of your hands. In the future, I see consumers having a better understanding of how information is collected, who accesses it and what it’s used for. I also think they will have a lot more say in how it is used.


Let’s say I want to work with my physician on a family healthcare plan. In addition to my family’s healthcare records, technology will make it possible for us to share lifestyle information such as what items we bought while grocery shopping during the past month; our exercise routine; what places we visited and what websites we surfed. All of this information will help the doctor make more precise recommendations about our family’s health and wellness. We also could opt to share the data (after it has been anonymized) to help the Centers for Disease Control and Prevention track and control epidemics.


How will this happen? I envision a digital footprint “wallet” that tracks and controls my data, recommending what, when, and how it can be used to benefit me, my friends and family, or society. For example, let’s say I want to book a family vacation to an exotic place, and need help making arrangements. My digital assistant may suggest I trade three months’ worth of my Facebook data for the travel booking services. It can also tell the travel agent my preferences, so the trip is customized.


Or, perhaps my digital assistant will ask me if I want to share my shopping history with a store to get recommendations from their fall line (no, I do not) or share my water bill history with a drought prevention group looking to understand usage patterns (yes, I do).


There are many technological and psychological factors we have to overcome before this approach becomes possible. User privacy, data security and consumer education are three areas I see advancing quickly. It could be that the digital wallet leads to digital pickpockets, as well.


Tong Sun leads the Scalable Data Analytics Research Lab at Xerox’s PARC.



5 Charts That Explain 2014’s Record-Smashing Heat


2014 was the hottest year since record-keeping began way back in the nineteenth century, according to reports released Friday by NASA and the National Oceanic and Atmospheric Administration. According to NASA, the Earth has now warmed roughly 1.4 degrees Fahrenheit since 1880, and most of that increase is the result of greenhouse gases released by humans. Nine of the 10 warmest years on record have occurred since 2000.


NASA and NOAA both conducted their own independent analyses of the data. But as you can see in the chart below, their results were nearly identical (all images below are from NASA and NOAA’s joint presentation):




The record warmth wasn’t spread evenly across the globe. Europe, parts of Asia, Alaska, and the Arctic were extremely warm. At the same time, the US Midwest and East Coast were unusually cold, according to NASA’s analysis:




Here’s another version of that map, from the NOAA analysis. This one shows that vast swaths of the oceans experienced record warm temperatures in 2014. Land temperatures in 2014 were actually the fourth warmest on record. But the oceans were so warm that the Earth as a whole was the hottest it has ever been since we started measuring:


All that warmth has led to a significant loss of sea ice in the Arctic. In 2014, Arctic sea ice reached its sixth lowest extent on record. It was a different story at the South Pole, however. Antarctica saw its highest extent of sea ice on record. According to NASA’s Gavin Schmidt, the factors affecting sea ice in Antarctica—changes in wind patterns, for example—seem to be “more complicated” than in the Arctic, where temperatures and ice extent correlate strongly:




So what’s causing this dramatic warming trend? In short, we are. Check out these charts, which show that if we weren’t pumping greenhouse gases into the atmosphere, the planet would actually be cooling right now:





The New World of Cutthroat Apps



Wanderful Media



There’s a new crop of apps intensifying competition among brick-and-mortar retailers by giving consumers a faster means of comparison and more advanced personalization. What this means is that businesses now face an ever-expanding set of consumer options, a one-directional change that will not reverse itself.


One new app that’s particularly brutal for retailers is Find&Save. Once upon a time, Find&Save was a fairly benign little program designed to offer its users coupons to nearby stores. But recently the CEO of Wanderful Media, the company that provides Find&Save, announced that it has added a new feature to the app, called Cash Dash. Cash Dash lets retailers offer you promotions the moment you walk into a competitor’s store, or even the moment you simply walk past it.
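For a sense of the mechanics, here is a rough sketch of the kind of geofence check a feature like Cash Dash implies. The store list, radius, coordinates, and promotion text are invented for illustration; this is not Wanderful Media’s code or API.

```python
# Hypothetical geofence check: if the shopper's location falls inside a
# competitor's geofence, surface a rival promotion. All data here is made up.
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in meters.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

COMPETITOR_STORES = [
    # (name, latitude, longitude, geofence radius in meters) -- illustrative only
    ("Home Depot #123", 37.7793, -122.4193, 75.0),
]

def promo_for_location(lat, lon):
    for name, slat, slon, radius in COMPETITOR_STORES:
        if distance_m(lat, lon, slat, slon) <= radius:
            return f"You're near {name} -- here's 10% off at our store instead."
    return None

print(promo_for_location(37.7794, -122.4192))
```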


Ben Smith, the CEO of Wanderful Media and one of the brains behind Cash Dash, was recently quoted saying: “When you’re walking into a Home Depot on Saturday morning, your intent is clear. You’re in home repair mode. That would be a very valuable audience for Lowe’s.”


Devilishly Competitive


If Smith’s plan sounds to you like an almost sinister level of retail competition genius, you’re right, and major retailers and their customers agree with you. Companies are already signing up in droves to do promotions through Cash Dash, and consumers are happily downloading the app that promises to give them inside intelligence and red-alerts for great deals.




It’s already been the case for some time now that a smartphone is a great aid to smart shopping. According to a recent study by Accenture, 68 percent of consumers check out items and prices in stores and then search for lower prices for those same items online, a practice now widely known as “showrooming.” So perhaps it comes as no surprise that now brick-and-mortar retailers are looking for ways to turn the information transparency made possible by smartphones to their own advantage.


The Hard Trend of Customers Going Mobile


Another major hard trend that apps like Find&Save reflect is that in order to compete with online shopping options, brick-and-mortar retailers absolutely have to leverage mobile technology.


In the physical world, retailers have become fairly good at tracking the moves their shoppers make in their stores, mostly through the information gathered in loyalty schemes and membership programs. However, they’ve been nowhere near as good at collecting and leveraging this kind of data as online retailers like Amazon.


In the vast majority of cases, brick-and-mortar retailers can only leverage this data when customers check out at the register. Before the mobile revolution, brick-and-mortar retailers didn’t have a way of knowing what shoppers looked at but didn’t buy, or whether they forgot something that they probably needed. Digital engagement until this point has given e-commerce sites like Amazon and Zappos a giant advantage. Digital stores are able to easily track those things that until now, brick-and-mortar retailers haven’t been able to monitor.


With mobile technology, though, retailers are now able to update their traditional loyalty programs. For example, Walgreens now offers a one-stop-shop app that customers can use to get prescription refills via barcode scanning, plus medication reminders, photo orders, and loyalty card point tracking. The solutions that Walgreens offers customers via its app are gradually personalized based on the customer’s activity (a la Amazon’s book recommendations), and triggers captured through the app create a high degree of consumer stickiness.


Thanks to its well-conceived app, Walgreens can now celebrate the fact that more than 50 percent of its online orders now come from mobile refills, up from 10 percent in 2010. This isn’t going away.


The lesson here: Don’t protect your cash cow. While you’re stuck guarding it, the world is going to change around you. Instead, you’ve got to extend and embrace the new reality that’s dawning on the horizon. Increasing transparency, ease of purchasing, and mobile-powered shopping are simply hard trends that are here to stay. Your business can’t compete on price alone. Your product or service needs another advantage now, such as speed, availability, or delivery.


Don’t just think of the cheapest way to offer your product or service. Think of the way to offer it that most meets the convenience needs of a hungry market.


Daniel Burrus is the founder and CEO of Burrus Research. He is the author of six books including the New York Times best seller “Flash Foresight.”



New salmonella serotype discovered

Lubbock is known for many things. Some of them are reasons to celebrate, like being the home of Buddy Holly. Some portray the city in negative ways, like dust storms.



The latest honor to come Lubbock's way may not sound good at first, but once you realize it's a breakthrough in the biological sciences, it becomes something to brag about.


Marie Bugarel, a research assistant professor at Texas Tech University's Department of Animal and Food Sciences in the College of Agricultural Sciences and Natural Resources, has discovered a new serotype of the salmonella bacteria. The new serotype was confirmed by the Pasteur Institute in Paris, the international reference center for salmonella.


Because convention calls for a new serotype to be named after the city in which it is discovered, this one will be called Salmonella Lubbock (officially Salmonella enterica subsp. enterica Lubbock).


"More important than the name, however, is that this discovery illustrates there is more that needs to be discovered about salmonella and how it interacts with cattle populations," said Guy Loneragan, a professor of food safety and public health who, along with Kendra Nightingale, are Bugarel's mentors at Texas Tech. "With this understanding will come awareness of how to intervene to break the ecological cycle and reduce salmonella in animals and in beef, pork and chicken products."


Bugarel, who came to Texas Tech with an extensive background in salmonella research, has worked on developing new tools to detect salmonella, new approaches to distinguish serotypes and ways to understand salmonella's biology.


Her work has led to a patent application that has been licensed to a high-tech biosciences research company. Her invention makes it possible to simultaneously detect and distinguish specific strains of salmonella by targeting a specific combination of DNA. That will allow for early detection in food while also identifying whether or not a strain is highly pathogenic.


In the research that led to Salmonella Lubbock, the impetus was to reduce salmonella in food and improve public health. Bugarel focused on providing solutions to control salmonella in cattle populations, which led to a better understanding of the biology of salmonella itself, including its genetic makeup. Through this approach, she discovered a strain never before described.


The long-held standard way of distinguishing one strain of salmonella from another is called serotyping and is based on the molecules on the surface of the bacterium. Each serotype has its own pattern of molecules, called antigens, and the collection of molecules provides a unique molecular appearance. These antigens interact with certain antibodies found in specifically prepared serum, thus providing the serotype. It is similar to how blood typing is performed.


"This discovery reinforces my feeling that the microbiological flora present in cattle in the United States harbors a singularity, which is an additional justification of the research we are doing in the International Center for Food Industry Excellence (ICFIE) laboratories at Texas Tech," Bugarel said. "Additional research will be performed to better describe the characteristics of this atypical bacterial flora and, more specifically, of the Lubbock serotype."


With this discovery, Loneragan believes, between 20 and 30 percent of isolates currently classified as two existing serotypes, Salmonella Montevideo and Salmonella Mbandaka, will be reclassified as Salmonella Lubbock. The algorithm used in serotyping has some stopping points, but Bugarel discovered a need to go a step further to get the correct strain name. Therefore, some of those strains called Montevideo and Mbandaka are now Salmonella Lubbock.


Some of the strains of Salmonella Lubbock fall into the category of serotype patterns that are more broadly resistant to many families of antibiotics, furthering the need for more research on the subject. Human susceptibility to the Lubbock strains remains unknown.


"We will continue to develop methods to detect, identify and control the presence of pathogenic microorganisms in food products in order to improve food safety and public health," Bugarel said.


"Kendra and I have been honored to serve as Marie's mentors," Loneragan said. "But now, the growth in Marie's expertise means that she is becoming the mentor to us. Many students, and the citizens of the United States in general and Texas in particular, are benefitting from her commitment to research excellence at Texas Tech. We are very lucky to have her."




Story Source:


The above story is based on materials provided by Texas Tech University. Note: Materials may be edited for content and length.



Ready for What’s Next? Envision a Future Where Your Personal Information Is Digital Currency


cashroll_660

401(K) 2013/Flickr



Have you heard the credit card commercial that asks, “what’s in your wallet?” Before long, the answer is going to be “you.”


As our digital footprint grows, I see personal information becoming a form of digital currency – something that an individual person owns, controls and uses in exchange for “personalized” goods and services.


An estimated 90 percent of American adults have a cell phone; 87 percent use the internet and about 74 percent of those users participate in social media, according to the Pew Research Internet Project.


Across the globe today, companies make money by selling or sharing the digital data left behind by these users. I see a future where the model is flipped – a world where consumers are in control of their own data and approve its release and use for personal benefits or for social good.


What data are we talking about? Take a minute to think about your online accounts and activities. You will be surprised at the length of the list: email; social media; web history; services such as television cable, electric power, cell phones; online banking; credit cards; apps such as game sites and coupons; even location data from your cell phone and GPS device. In addition to all that, a lot of information is preserved on private data servers, such as electronic health records.


Today, much of this digital information is not under your control. For example, you “trade” information with an email provider to create an email account. But it’s hard to know how that service provider will, in turn, use your information. Once you provide the data points, they are out of your hands. In the future, I see consumers having a better understanding of how information is collected, who accesses it and what it’s used for. I also think they will have a lot more say in how it is used.


Let’s say I want to work with my physician on a family healthcare plan. In addition to my family’s healthcare records, technology will make it possible for us to share lifestyle information such as what items we bought while grocery shopping during the past month; our exercise routine; what places we visited and what websites we surfed. All of this information will help the doctor make more precise recommendations about our family’s health and wellness. We also could opt to share the data (after it has been anonymized) to help the Centers for Disease Control and Prevention track and control epidemics.


How will this happen? I envision a digital footprint “wallet” that tracks and controls my data, recommending what, when, and how it can be used to benefit me, my friends and family, or society. For example, let’s say I want to book a family vacation to an exotic place, and need help making arrangements. My digital assistant may suggest I trade three months’ worth of my Facebook data for the travel booking services. It can also tell the travel agent my preferences, so the trip is customized.


Or, perhaps my digital assistant will ask me if I want to share my shopping history with a store to get recommendations from their fall line (no, I do not) or share my water bill history with a drought prevention group looking to understand usage patterns (yes, I do).
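

As a thought experiment, a minimal Python sketch of the consent logic such a digital wallet might apply could look like the following. Every name here – the data categories, the recipients, the decide_share method – is invented for illustration and is not drawn from any existing product.

# Hypothetical sketch of user-controlled data-sharing rules for a
# "digital footprint wallet". All names and categories are invented.

from dataclasses import dataclass

@dataclass
class SharingRule:
    category: str      # e.g. "shopping_history", "water_bill", "facebook_posts"
    recipient: str     # e.g. "retailer", "drought_prevention_group", "travel_agent"
    purpose: str       # e.g. "recommendations", "usage_research", "trip_planning"
    allow: bool

class DigitalWallet:
    def __init__(self, rules):
        self.rules = rules

    def decide_share(self, category, recipient, purpose):
        """Return True only if the owner has a rule explicitly allowing this request."""
        for rule in self.rules:
            if (rule.category, rule.recipient, rule.purpose) == (category, recipient, purpose):
                return rule.allow
        return False  # default-deny: unknown requests are refused

wallet = DigitalWallet([
    SharingRule("shopping_history", "retailer", "recommendations", allow=False),
    SharingRule("water_bill", "drought_prevention_group", "usage_research", allow=True),
])

print(wallet.decide_share("shopping_history", "retailer", "recommendations"))           # False
print(wallet.decide_share("water_bill", "drought_prevention_group", "usage_research"))  # True

The default-deny behavior reflects the premise of this piece: the consumer, not the service provider, decides what is released and for what purpose.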


There are many technological and psychological hurdles to overcome before this approach becomes possible. User privacy, data security and consumer education are three areas I see advancing quickly. And the digital wallet could invite digital pickpockets as well.


Tong Sun leads the Scalable Data Analytics Research Lab at Xerox’s PARC.



5 Charts That Explain 2014’s Record-Smashing Heat


2014 was the hottest year since record-keeping began way back in the nineteenth century, according to reports released Friday by NASA and the National Oceanic and Atmospheric Administration. According to NASA, the Earth has now warmed roughly 1.4 degrees Fahrenheit since 1880, and most of that increase is the result of greenhouse gases released by humans. Nine of the 10 warmest years on record have occurred since 2000.
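

Readers who want to poke at the numbers themselves could do so with a short script along these lines. This is a minimal Python sketch under assumed conventions: the file name annual_anomalies.csv and its year/anomaly_c columns are placeholders, since NASA and NOAA each publish their annual global anomaly series in their own formats.

# Minimal sketch: rank years by annual global temperature anomaly.
# Assumes a CSV named "annual_anomalies.csv" with two columns, "year" and
# "anomaly_c" (anomaly in degrees Celsius relative to some baseline).
# The file name and column names are assumptions for illustration.

import csv

def load_anomalies(path="annual_anomalies.csv"):
    """Load a {year: anomaly} mapping from the assumed CSV layout."""
    with open(path, newline="") as f:
        return {int(row["year"]): float(row["anomaly_c"]) for row in csv.DictReader(f)}

def warmest_years(anomalies, n=10):
    """Return the n warmest years, hottest first."""
    return sorted(anomalies, key=anomalies.get, reverse=True)[:n]

if __name__ == "__main__":
    data = load_anomalies()
    top10 = warmest_years(data, 10)
    print("Warmest year on record:", top10[0])
    print("Top-10 warmest years occurring since 2000:", sum(1 for y in top10 if y >= 2000))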


NASA and NOAA both conducted their own independent analyses of the data. But as you can see in the chart below, their results were nearly identical (all images below are from NASA and NOAA’s joint presentation):


[Chart: NASA and NOAA annual global temperature analyses]


The record warmth wasn’t spread evenly across the globe. Europe, parts of Asia, Alaska, and the Arctic were extremely warm. At the same time, the US Midwest and East Coast were unusually cold, according to NASA’s analysis:


[Map: 2014 temperature anomalies, NASA analysis]


Here’s another version of that map, from the NOAA analysis. This one shows that vast swaths of the oceans experienced record-warm temperatures in 2014. Land temperatures in 2014 were actually the fourth warmest on record, but the oceans were so warm that the Earth as a whole was the hottest it has been since we started measuring:


[Map: 2014 temperature anomalies, NOAA analysis]


All that warmth has led to a significant loss of sea ice in the Arctic. In 2014, Arctic sea ice reached its sixth lowest extent on record. It was a different story at the South Pole, however. Antarctica saw its highest extent of sea ice on record. According to NASA’s Gavin Schmidt, the factors affecting sea ice in Antarctica—changes in wind patterns, for example—seem to be “more complicated” than in the Arctic, where temperatures and ice extent correlate strongly:


[Chart: Arctic and Antarctic sea ice extent]


So what’s causing this dramatic warming trend? In short, we are. Check out these charts, which show that if we weren’t pumping greenhouse gases into the atmosphere, the planet would actually be cooling right now:


[Charts: modeled global temperatures with and without human greenhouse gas emissions]



The New World of Cutthroat Apps


[Image: Wanderful Media]



There’s a new crop of apps intensifying competition among brick-and-mortar retailers by giving consumers a faster means of comparison and more advanced personalization. What this means is that businesses have to deal with a linear change: ever-increasing consumer options.


One new app that’s particularly brutal for retailers is Find&Save. Once upon a time, Find&Save was a fairly benign little program designed to offer its users coupons to nearby stores. But recently the CEO of Wanderful Media, the company that provides Find&Save, announced that it has added a new feature to the app: Cash Dash. Cash Dash lets retailers offer you promotions the moment you walk into a competitor’s store, or even the moment you simply walk past it.
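

Under the hood, a feature like Cash Dash amounts to geofencing: detect that a shopper’s phone is inside a radius around a competitor’s location, then push a rival’s offer. The Python sketch below is purely illustrative; the store, coordinates, radius, and promotion text are invented, and a real implementation would lean on the phone’s native location services and a retailer-facing campaign platform rather than a hand-rolled distance check.

# Hypothetical geofencing sketch: trigger a rival's promotion when a shopper
# enters a competitor's geofence. Store names, coordinates, and the 150 m
# radius are invented for illustration.

from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

# Each geofence pairs a competitor's location with the promotion a rival wants shown there.
GEOFENCES = [
    {"competitor": "Home Improvement Store A", "lat": 40.7500, "lon": -73.9900,
     "radius_m": 150, "promotion": "10% off at Rival Store B today"},
]

def promotions_for(lat, lon):
    """Return promotions whose geofence contains the shopper's current position."""
    return [g["promotion"] for g in GEOFENCES
            if distance_m(lat, lon, g["lat"], g["lon"]) <= g["radius_m"]]

print(promotions_for(40.7501, -73.9901))   # inside the fence -> promotion fires
print(promotions_for(40.7600, -73.9700))   # well outside -> nothing

The hard part in practice is not the distance math but battery-efficient location monitoring and retailer sign-up, which is presumably what a platform like Wanderful Media’s handles.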


Ben Smith, the CEO of Wanderful Media and one of the brains behind Cash Dash, was recently quoted saying: “When you’re walking into a Home Depot on Saturday morning, your intent is clear. You’re in home repair mode. That would be a very valuable audience for Lowe’s.”


Devilishly Competitive


If Smith’s plan sounds to you like an almost sinister level of competitive genius, you’re right, and major retailers and their customers seem to agree. Companies are already signing up in droves to run promotions through Cash Dash, and consumers are happily downloading an app that promises inside intelligence and red alerts for great deals.


[From WIRED Business: An App That Helps Retailers Steal Each Other’s Customers]


For some time now, a smartphone has been a great aid to smart shopping. According to a recent study by Accenture, 68 percent of consumers check out items and prices in stores and then search online for lower prices on those same items, a practice now widely known as “showrooming.” So perhaps it comes as no surprise that brick-and-mortar retailers are now looking for ways to turn the information transparency made possible by smartphones to their own advantage.


The Hard Trend of Customers Going Mobile


Another major hard trend that apps like Find&Save reflect is that, in order to compete with online shopping options, brick-and-mortar retailers absolutely have to leverage mobile technology.


In the physical world, retailers have become fairly good at tracking the moves their shoppers make in their stores, mostly through the information gathered in loyalty schemes and membership programs. However, they’ve been nowhere near as good at collecting and leveraging this kind of data as online retailers like Amazon.


In the vast majority of cases, brick-and-mortar retailers can only capture this data when customers check out at the register. Before the mobile revolution, they had no way of knowing what shoppers looked at but didn’t buy, or whether shoppers forgot something they probably needed. That gap in digital engagement has given e-commerce sites like Amazon and Zappos a giant advantage: online stores can easily track exactly the things that brick-and-mortar retailers have not, until now, been able to monitor.


With mobile technology, though, retailers are now able to update their traditional loyalty programs. For example, Walgreens now offers a one-stop-shop app that customers can use to get prescription refills via barcode scanning, along with medication reminders, photo orders, and loyalty card point tracking. The features Walgreens offers through its app are gradually personalized based on the customer’s activity (much like Amazon’s book recommendations), and behavioral triggers captured through the app create a high degree of customer stickiness.
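

To illustrate the pattern rather than Walgreens’ actual systems, here is a hypothetical Python sketch of a barcode-triggered refill that also feeds a simple personalization loop. The barcode value, the prescription lookup, and the reminder suggestion are all invented.

# Hypothetical sketch: barcode-triggered refill plus a simple personalization signal.
# The barcode-to-prescription mapping and the recommendation rule are invented.

PRESCRIPTIONS = {"0123456789": "Medication A, 10 mg"}   # barcode -> medication (illustrative)

activity_log = []   # events the app could later mine for personalization

def refill_from_barcode(barcode):
    """Look up the prescription for a scanned barcode and record the event."""
    med = PRESCRIPTIONS.get(barcode)
    if med is None:
        return "Barcode not recognized"
    activity_log.append(("refill", med))
    return f"Refill requested: {med}"

def suggest_next_action():
    """Naive personalization: after a refill, offer a medication reminder."""
    if any(kind == "refill" for kind, _ in activity_log):
        return "Set a daily medication reminder?"
    return None

print(refill_from_barcode("0123456789"))
print(suggest_next_action())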


Thanks to its well-conceived app, Walgreens can now celebrate the fact that more than 50 percent of its online orders come from mobile refills, up from 10 percent in 2010. This trend isn’t going away.


The lesson here: Don’t protect your cash cow. While you’re stuck guarding it, the world is going to change around you. Instead, you’ve got to extend and embrace the new reality that’s dawning on the horizon. Increasing transparency, ease of purchasing, and mobile-powered shopping are simply hard trends that are here to stay. Your business can’t compete on price alone. Your product or service needs another advantage now, such as speed, availability, or delivery.


Don’t just think of the cheapest way to offer your product or service. Think of the way to offer it that best meets the convenience needs of a hungry market.


Daniel Burrus is the founder and CEO of Burrus Research. He is the author of six books, including the New York Times best seller “Flash Foresight.”