MIT Develops a Novel Camouflaging Algorithm That Hides Eyesores


Our world is full of unsightly views: construction sites that mar our streets, condos that block our view of the skyline, an inconveniently positioned air conditioner. We’ve resigned ourselves to these things, but there might be a solution on the way thanks to a new project from MIT that’s exploring how to banish eyesores with custom camouflage.


MIT’s big idea is to create printable camouflage coverings using algorithms. These algorithms pull in environmental data via photographs and construct an image that best blends an object in with its surroundings. Think of it as an invisibility cloak for the stuff you don’t want to see.



It sounds easy enough—just slap a brick covering on the utility box outside your apartment building and call it good—but it’s much more complicated than that. Inanimate objects don’t have the luxury of blending like a cuttlefish; it takes a totally new way of thinking about computer vision to hide an object in plain sight.


The computer vision field has long been laser-focused on teaching computers how to see things. MIT’s algorithm does the exact opposite. “Often these algorithms work by searching for specific cues—for example they might look for the contours of the object or for distinctive textures,” explains Andrew Owens, an MIT graduate student in electrical engineering and computer science who authored the paper. “With camouflage, you want to avoid these cues; you don’t want the object’s contours to be visible or for its texture to be very distinctive.”



A cube is wrapped in an algorithm-generated camo covering. Can you spot it from various angles? Image: MIT



The problem with camouflaging objects is that while they remain stationary, we humans are seeing them from various angles. Maybe you’re looking at the utility box head-on, but the person next to you will be seeing it from a slightly different angle. Accounting for these different vantage points is a complex problem for an algorithm to solve. “If you knew exactly where the viewer would be standing then the problem would be solved,” says Owens.


The process begins with eight to 20 photographs taken from different angles around the object to be hidden. With that data in hand, the algorithm goes to work, finding ways to blend the object into its surroundings from each viewpoint. It’s nearly impossible to match the background from every vantage point; what works from one angle can make the camouflaged object look totally obvious from another.
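

To make the idea concrete, here is a minimal sketch, in Python with toy data, of one piece of that kind of pipeline: for each face of the object, pick the photograph that views it most head-on and cover the face with the background visible behind it from that viewpoint. The camera directions, face normals, and the project_background() helper are hypothetical stand-ins for the calibration and image-warping machinery a real system would need; this is not the team’s actual algorithm.

```python
# Minimal sketch (not MIT's actual method): given several photographs taken
# around an object, cover each visible face with the background patch seen
# from the viewpoint that faces it most directly. The camera directions,
# face normals, and project_background() are hypothetical stand-ins.
import numpy as np

def project_background(photo, face_index):
    """Hypothetical helper: return the background patch that lies behind
    `face_index` in `photo`. A real system would warp pixels using the
    camera pose and the object's geometry; here we just crop the center."""
    h, w, _ = photo.shape
    return photo[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def choose_face_textures(photos, view_dirs, face_normals):
    """For each face, use the photo whose viewing direction is most head-on
    to that face, and cover the face with the background behind it."""
    textures = []
    for i, normal in enumerate(face_normals):
        # Most head-on view = viewing direction most opposed to the face normal.
        alignment = [-np.dot(d, normal) for d in view_dirs]
        best_view = int(np.argmax(alignment))
        textures.append(project_background(photos[best_view], i))
    return textures

# Toy data: 8 random "photos" taken from directions around a 6-faced cube.
rng = np.random.default_rng(0)
photos = [rng.random((240, 320, 3)) for _ in range(8)]
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
view_dirs = [np.array([np.cos(a), np.sin(a), 0.0]) for a in angles]
face_normals = [np.array(v) for v in
                ([1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1])]
print([t.shape for t in choose_face_textures(photos, view_dirs, face_normals)])
```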


This means the algorithm has to make tradeoffs among considerations like: How well does the object’s border blend into its background? How distorted is the object’s texture? Can the seams be reduced from as many camera angles as possible to minimize those visual transitions? You can’t win on every count, says Bill Freeman, a professor of computer science and one of Owens’ thesis advisors. “We’re trying to explore computationally what the different tradeoffs are.”
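

One rough way to picture those tradeoffs is as a weighted cost that a candidate covering accumulates across all of the photographed viewpoints. The sketch below is illustrative only; the specific terms and weights are assumptions, not the ones used in the paper.

```python
# Illustrative sketch of a multi-viewpoint tradeoff: score a candidate
# covering by how badly its border, texture, and seams stand out from each
# photographed viewpoint, then keep the lowest-cost candidate. The three
# terms and their weights are assumptions, not the paper's formulation.
import numpy as np

def covering_cost(rendered_views, background_views,
                  w_border=1.0, w_texture=0.5, w_seams=0.5):
    """Lower is better. Each entry is an HxWx3 image of the covered object
    (rendered) or of the scene behind it (background) from one viewpoint."""
    costs = []
    for rendered, background in zip(rendered_views, background_views):
        # Border term: mismatch along the edges of the object region.
        border = (np.abs(rendered[[0, -1], :, :] - background[[0, -1], :, :]).mean()
                  + np.abs(rendered[:, [0, -1], :] - background[:, [0, -1], :]).mean())
        # Texture term: overall difference between covering and backdrop.
        texture = np.abs(rendered - background).mean()
        # Seam term: sharp jumps inside the covering itself.
        seams = (np.abs(np.diff(rendered, axis=0)).mean()
                 + np.abs(np.diff(rendered, axis=1)).mean())
        costs.append(w_border * border + w_texture * texture + w_seams * seams)
    # Average over all viewpoints: no single angle is allowed to dominate.
    return float(np.mean(costs))

# Toy usage: compare two random candidate coverings over 8 viewpoints.
rng = np.random.default_rng(1)
backgrounds = [rng.random((64, 64, 3)) for _ in range(8)]
candidate_a = [b + 0.05 * rng.standard_normal(b.shape) for b in backgrounds]
candidate_b = [rng.random((64, 64, 3)) for _ in range(8)]
print(covering_cost(candidate_a, backgrounds),
      covering_cost(candidate_b, backgrounds))
```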


In a real-world example, Owens’ team wrapped a cube in a covering patterned after the bookshelf behind it. From most angles the cube blends in nicely, but every so often you catch a glimpse of a book spine split at a 90-degree angle. It’s the type of thing you’d casually stroll by without giving a second thought, but once you notice it, it becomes obvious. Visual glitches like these are inevitable, but MIT’s goal is to produce an algorithm that leaves as few as possible. As Freeman puts it: “You have this 3-D mass you want to put down somewhere and paint it to look like there isn’t a big blob there. That’s hard to do.”


