OpenAI has introduced the world to its latest powerful AI model, GPT-4, and refreshingly the first thing it partnered on with those new capabilities is helping people with visual impairments. Be My Eyes, which lets blind and low-vision folks ask sighted people to describe what their phone sees, is getting a “virtual volunteer” that offers AI-powered help at any time. Alongside the launch, OpenAI said, "We’re also open-sourcing OpenAI Evals, our framework for automated evaluation of AI model performance, to allow anyone to report shortcomings in our models to help guide further improvements."
We’ve written about Be My Eyes plenty of times since it was started in 2015, and of course the rise of computer vision and other tools has figured prominently in its story of helping the visually impaired more easily navigate everyday life.
Be My Eyes Virtual Volunteer
It’s a very concise demonstration of how unfriendly much of our urban and commercial infrastructure is for people with vision issues. And it also shows how useful GPT-4’s multimodal chat can be in the right circumstances.
No doubt human volunteers will continue to be instrumental for users of the Be My Eyes app — there’s no replacing them, only raising the bar for when they’re needed (and indeed they can be summoned immediately if the AI response isn’t good enough).
As an example, the AI helpfully suggests at the gym that “the available machines are the ones without people on them.” Thanks! As OpenAI co-founder Sam Altman said today, the capabilities are more impressive at first blush than after you’ve been using them for a while, but we should also be careful about looking this gift horse in the mouth too closely.
The team at Be My Eyes is working closely with OpenAI and with its community to define and guide the feature’s capabilities as development continues. More specifically, the app is set to employ GPT-4’s dynamic image-to-text generation to power the ‘Virtual Volunteer’ AI feature.
How Virtual Volunteer improves the app
Be My Eyes is an app that connects visually impaired people with a community of volunteers and company representatives via video call. The platform lets them get help from volunteers with various daily needs, such as reading small text or differentiating between colours. However, the app is inherently community-driven, and its users depend on others to help them out.
By incorporating GPT-4 technology, the app may be able to finally overcome this limitation. The new AI model’s ability to analyse images is key to the new Be My Eyes AI offering.
This feature lets visually impaired users share images with the AI and ask questions about them. As shown in an official clip, the AI can perform a number of tasks, such as identifying a plant or locating a particular machine at the gym.
New AI-powered beta feature will expand to more users soon
The Virtual Volunteer feature is currently in closed beta and available only to certain testers. Be My Eyes users can register for the waitlist for the AI feature, which is set to roll out to more people in the coming weeks. The company has also confirmed that Virtual Volunteer will be free for all users of the app.
Still, the app itself can only do so much, and a core feature has always been the helping hand of a volunteer who can look through your phone’s camera view and give detailed descriptions or instructions. The Virtual Volunteer doesn’t replace that; it simply means that hand is now available around the clock.