CamFind, which aims to be the Google of visual search, raises $4.8M in seed funding

Left: Co-Founder Dominik Mazur. Right: Co-Founder Brad Folkens. Photo Credit: PR

Last week, it became the first visual search app on Google Glass. With prominent angel investors and a large seed round, perhaps CamFind has enough hype to crack the visual search technology nut.

Google, the world’s dominant tech company, built its empire on keyword search. But what if keyword search is passé? What is the next big thing and which company will seize the opportunity?

Back in 2011, CamFind co-founders Dominik Mazur and Brad Folkens had an aha moment when they read a report from eMarketer indicating that search on desktop was declining for the first time in history.

“We started looking at mobile,” Folkens told Geektime. “We realized that visual search on mobile is a really good opportunity.”

Mazur and Folkens looked at the visual products that were already out there, including Google Goggles and Amazon Firefly.

“We would take 10 or 20 photos and they would get one or two correct,” said Folkens. “They were good enough to be a novelty or toy but not a strong utility like keyword search.”

So CamFind set out to succeed where the likes of Amazon and Google had failed.

The company recently raised $4.8M in funding from dozens of individual angel investors, including Kamran Pourzanjani, who sold his company PriceGrabber to Experian for $485 million, and David Perry, who sold his video game streaming company, Gaikai, to Sony to the tune of $380 million.

Will they succeed?

CamFind has over 11 patents, and its founders claim it is the “most advanced image recognition software out there.”

“The academic world looks at image recognition as a strict computer vision problem. Most visual search engines get exactly what a user is looking for or completely don’t know what it is.”

CamFind, he says, gives you some sort of answer even if the photo is blurry or taken at an odd angle. For instance, if you take a photo of a woman’s shoe in good lighting from 10 feet away, the app will tell you that it’s a Jimmy Choo shoe. But if the lighting is bad and the image is blurry, the app might still tell you that it’s a red, spike-heeled woman’s shoe.

CamFind’s app was launched in April 2013, and has had 2 million downloads so far. It has processed 21 million visual searches, 83 percent of which were for products of one sort or another.

Last week, CamFind became the first visual search app on Google Glass, and the company released a demo showing the app in action.

“Google did not have visual search for their own device,” said Folkens. “We beat them to it.”

Folkens believes that as Google has grown big and corporate, it has lost its innovative edge. “Their approach is very academic.”

For instance, let’s say you take a photo of a Starbucks logo. Most visual search apps will recognize the logo and turn up “Starbucks” as your search result. But CamFind will mash up the results of several methods of computer vision and instead return a result that is more contextual: “Starbucks ceramic white mug.”
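
To make that idea concrete, here is a minimal, hypothetical sketch of how outputs from separate recognition models (a logo detector, an object classifier, a color/attribute model) could be merged into one contextual phrase. The function names, confidence threshold, and example scores are illustrative assumptions, not CamFind's actual pipeline.

```python
# Hypothetical sketch: combine candidate labels from several independent
# computer vision models into a single contextual description.
# All names and numbers are illustrative, not CamFind's real system.

def describe_image(logo_hits, object_hits, attribute_hits):
    """Merge candidate labels from independent models into one phrase.

    Each *_hits argument is a list of (label, confidence) pairs produced by a
    separate model. We keep the highest-confidence label from each model and
    concatenate them, so a blurry photo still yields a partial answer.
    """
    parts = []
    for hits in (attribute_hits, logo_hits, object_hits):
        if hits:
            label, confidence = max(hits, key=lambda pair: pair[1])
            if confidence > 0.3:  # drop guesses the model barely believes
                parts.append(label)
    return " ".join(parts) if parts else "unknown object"

# A sharp photo of a branded mug vs. a blurry, logo-less one.
print(describe_image([("Starbucks", 0.92)], [("ceramic mug", 0.88)], [("white", 0.75)]))
# -> "white Starbucks ceramic mug"
print(describe_image([], [("mug", 0.41)], [("white", 0.6)]))
# -> "white mug"
```

The point of the sketch is the fallback behavior: even when one model (here the logo detector) has nothing useful to say, the others still contribute a partial, contextual answer.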

The company has also just launched a public API called CloudSight so that developers can integrate visual search technology into their own platforms.
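
For developers, integrating such an API typically amounts to uploading an image and reading back a text description. The sketch below is a rough illustration only; the endpoint URL, field names, and response format are placeholders I am assuming for the example, so consult CloudSight's actual documentation rather than this snippet.

```python
# Rough sketch of calling an image-recognition REST API such as CloudSight.
# The endpoint, auth scheme, and response shape below are assumptions for
# illustration; the real interface is defined by CloudSight's documentation.
import requests

API_KEY = "your-api-key"                        # placeholder credential
ENDPOINT = "https://api.example.com/v1/images"  # hypothetical endpoint

def visual_search(image_path):
    """Upload an image and return the text description the service produces."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
        )
    response.raise_for_status()
    return response.json().get("description", "")

if __name__ == "__main__":
    print(visual_search("shoe.jpg"))  # e.g. "red spike-heeled women's shoe"
```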

Folkens says his company has been approached by five of the top ten North American retailers as well as three of the five most popular cellphone manufacturers. These cellphone manufacturers hope to include CamFind as a pre-installed app in their phones. If this happens, it will raise the number of installations for the app from 2 million to several hundred million, Folkens claimed.

The app is focused on four categories of images: fashion, automobiles, insects and dogs. Thus, if you point the camera at a car, it should be able to tell you the make and model.

About 0.2 percent of the search platform’s queries are for insects. Mazur says these can come from people who were bitten by a spider and want to find out if it’s poisonous.

“We can usually tell you the exact species,” said Mazur.

One of the odder uses people have found for the app is identifying skin rashes. Folkens and Mazur say they considered training the app, which uses deep learning algorithms that improve as they see more images of a given category, to recognize rashes, but then realized this would be medically irresponsible.

“There’s so much more to dermatology than just what the rash looks like. There’s drug history and whether the rash is burning, itching or scaling,” said Folkens.

“We just give them the result ‘skin condition,’ and they can scroll through the search results to narrow it down.”
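
As noted above, the app reportedly relies on deep learning models that get better as they see more labeled images of a category. The snippet below is only a generic illustration of that kind of incremental learning, using scikit-learn on made-up feature vectors; the features, labels, and categories are stand-ins and have nothing to do with CamFind's actual models.

```python
# Generic illustration of incremental learning: the classifier keeps improving
# as new batches of labeled image features arrive. Placeholder data only.
import numpy as np
from sklearn.linear_model import SGDClassifier

classifier = SGDClassifier()
categories = np.array([0, 1])  # e.g. 0 = "dog", 1 = "insect"

def learn_from_batch(features, labels):
    """Update the model with a new batch of labeled image features."""
    classifier.partial_fit(features, labels, classes=categories)

# Each new batch of photos (here random stand-in features) refines the model.
rng = np.random.default_rng(0)
for _ in range(5):
    batch_features = rng.normal(size=(32, 128))  # 32 images, 128-dim features
    batch_labels = rng.integers(0, 2, size=32)
    learn_from_batch(batch_features, batch_labels)
```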

The app makes money through contextual search advertising, which is the same business model as Google search. For its public API, the company charges per search request. While they do earn a commission on any product sold through their searches, the amount is relatively small.

“People most of the time don’t buy something directly from a website affiliate.”

Does it work?

Geektime, of course, downloaded CamFind and tried it out. We were impressed by its identification of everyday objects, such as “blue plastic trash bin” and “wall-mounted fire detector.”

However, when we gave CamFind a sign in Hebrew to decipher, the app was dumbfounded.

“Logo,” it offered.

The truth is that visual search technology has a long way to go, and whoever is able to do it well will certainly reap abundant rewards.

Do you hear that, Google? The game is not lost yet.

About Simona Weinglass


I’m an old-school journalist who recently decided to pivot into high-tech. I work in high-tech marketing as well as print and broadcast media covering politics, business culture and everything in between.
