Facial Recognition: 2022’s top developments

Video analytics saw massive growth last year as the benefits of this AI-powered software were realised. Liana Meliksetyan, Chief Commercial Officer at NtechLab, outlines the important developments in the facial recognition market

The global video analytics market is currently estimated at $5.9 billion and is forecast to reach $14.9 billion by 2026, according to B2B research firm MarketsandMarkets. Within the security industry, the technology is widely used for its ability to recognise faces and other objects in a video stream and then analyse this data. So what key developments were made in 2022?

Products first

In the past, companies developed video analytics algorithms under academic conditions, based on synthetic data. This work took place essentially in a vacuum, with the aim of establishing whether the technology could work at all.

Today, the technology is far more rooted in actual practice, with companies applying algorithms to real-life problems and using this as the basis for developing products, such as self-checkout counters in stores that let customers pay using facial recognition.

Researchers and engineers are working on solving problems using real-life data to develop technology that has wider commercial applications.

As a result, modern algorithms are highly data-driven: the more data, the better. However, this creates challenges in accessing data at speed. Furthermore, quality requirements for pre-processing data are increasing, and faultless data markup and data cleansing are significant factors in ensuring the accuracy of video analytics systems.

Combining several algorithms into a single product

In the early stages of video analytics development, each algorithm was created and used on a stand-alone basis, so each application was separate: face recognition sat in one system, car recognition in a different product, and the ability to respond to specific situations required yet another system. Today, companies are striving to create integrated products by combining several algorithms into a single solution. Connecting these different analytic processes with each other opens up new opportunities for businesses.

Software developers, for instance, are combining face recognition with recognition of silhouettes, cars, and other objects. This is complemented by action recognition in multi-object video analytics products, in which cameras are configured to recognise different object types and a single camera can process several of them at once.
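As a rough illustration of this idea, the sketch below shows a single camera frame being passed through several detectors in one pipeline, with the results merged into one event stream. The detector functions, class names and confidence values are hypothetical stand-ins rather than any vendor's actual API.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "face", "vehicle", "silhouette"
    confidence: float
    bbox: tuple       # (x, y, width, height) in pixels

def detect_faces(frame):
    """Placeholder for a face-recognition model."""
    return [Detection("face", 0.97, (120, 40, 64, 64))]

def detect_vehicles(frame):
    """Placeholder for a vehicle-recognition model."""
    return [Detection("vehicle", 0.91, (300, 200, 180, 90))]

def detect_actions(frame):
    """Placeholder for an action-recognition model (e.g. a fall or a fight)."""
    return []

# One frame is passed through every analytic in a single pipeline,
# and the detections are merged into one result list for the operator.
DETECTORS = [detect_faces, detect_vehicles, detect_actions]

def analyse_frame(frame):
    detections = []
    for detector in DETECTORS:
        detections.extend(detector(frame))
    return detections

if __name__ == "__main__":
    frame = object()  # stand-in for a decoded video frame
    for d in analyse_frame(frame):
        print(f"{d.label}: {d.confidence:.2f} at {d.bbox}")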

Optimisation of computing resources

When developing algorithms, companies do not always consider the computing resources their systems will require. The first goal is for the technology to work, fulfil the assigned tasks and do so in a stable and efficient manner. As a result, new technologies often require intensive computing resources, a developed infrastructure, and high-speed Internet.

Today there is a much better understanding of how video analytics works and of its commercial applications. The next challenge is optimising the technology so that businesses can actually use it. Since hardware for large projects is expensive, algorithms are being custom-optimised for smaller-scale use.

As a result, the performance of video analytics is being significantly accelerated on relatively modest hardware. This is of enormous benefit to businesses, which can now make substantial savings.

In addition, the fewer computing resources required to run the algorithms, the more consumers will be able to use the technology. At the same time, a company can optimise the product before releasing it while also giving clients the option of manually customising the application so that it is tuned to their hardware.
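To make the idea of client-side tuning concrete, here is a minimal sketch assuming a hypothetical set of runtime knobs (input resolution, frame stride, numeric precision, batch size) that a client could adjust to match their own hardware. The names and thresholds are illustrative only and do not reflect any particular product's settings.

from dataclasses import dataclass

@dataclass
class RuntimeProfile:
    """Illustrative knobs a client might expose when tuning for their hardware."""
    input_resolution: tuple   # frames are downscaled to this size before inference
    frame_stride: int         # process every Nth frame to cut GPU load
    precision: str            # "fp32", "fp16" or "int8" quantisation
    batch_size: int

def profile_for_hardware(gpu_memory_gb: float) -> RuntimeProfile:
    """Pick a conservative profile based on available GPU memory (illustrative thresholds)."""
    if gpu_memory_gb >= 16:
        return RuntimeProfile((1920, 1080), frame_stride=1, precision="fp16", batch_size=8)
    if gpu_memory_gb >= 8:
        return RuntimeProfile((1280, 720), frame_stride=2, precision="fp16", batch_size=4)
    # Modest or CPU-only hardware: lower resolution, heavier frame skipping, int8.
    return RuntimeProfile((960, 540), frame_stride=4, precision="int8", batch_size=1)

print(profile_for_hardware(6))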

This increases the overall convenience of emerging solutions. You no longer need to spend a lot of time setting up a complex system: just buy a product, press a few buttons, and everything will work.

Non-discriminatory artificial intelligence

With emerging anti-discrimination trends and the proliferation of video analytics around the world, companies are increasingly focusing on ensuring that algorithms work equally well for different ethnic groups.

This is achieved by carrying out a set of tests for each individual ethnic group and ensuring that accuracy is the same in all cases before releasing any video analytics algorithm onto the market. At NtechLab, each new algorithm undergoes these tests automatically. If problems are detected, they are dealt with promptly.
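A minimal sketch of what such an automated release gate might look like is shown below, assuming hypothetical per-group accuracy figures and an illustrative tolerance; the group labels, numbers and threshold are invented for the example.

# Hypothetical per-group accuracy figures produced by an evaluation run;
# in practice these would come from labelled benchmark sets for each group.
group_accuracy = {
    "group_a": 0.994,
    "group_b": 0.992,
    "group_c": 0.995,
}

MAX_ACCURACY_GAP = 0.005  # illustrative tolerance between best and worst group

def passes_fairness_check(accuracy_by_group, max_gap=MAX_ACCURACY_GAP):
    """Fail the release gate if any group lags the best-performing group by more than max_gap."""
    best = max(accuracy_by_group.values())
    worst = min(accuracy_by_group.values())
    return (best - worst) <= max_gap

assert passes_fairness_check(group_accuracy), "accuracy gap between groups is too large"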

In addition, and importantly, video surveillance systems are becoming more than just a security tool. Developers are looking at integrating AI-based analytics into much broader areas, such as smart homes, smart factories and smart cities. This offers tremendous potential for creating safer environments for, among others, people with disabilities, children, and mothers with strollers.

These systems, for example, can detect if a person has fallen on the street and is not moving; in such cases, a signal can be sent to emergency services. City cameras with video analytics systems can also collect data which, upon analysis, can reveal, for example, that crossings on certain road sections are often used by children and that the intense traffic flow puts their lives at risk. This enables the local authority to add speed bumps, additional signs, or a street refuge. In short, the potential of video analytics systems is tremendous.
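The fall-detection example can be sketched as a simple rule on top of per-frame detections: if a person is classified as fallen and motionless for longer than some dwell time, raise an alert. The class, dwell time and detection flags below are hypothetical and only illustrate the logic.

import time

FALL_CONFIRMATION_SECONDS = 30  # illustrative dwell time before raising an alert

class FallMonitor:
    """Raise an alert if a detected person stays fallen and motionless for too long."""

    def __init__(self):
        self.fallen_since = None

    def update(self, person_is_fallen, person_is_moving, now=None):
        now = time.time() if now is None else now
        if person_is_fallen and not person_is_moving:
            if self.fallen_since is None:
                self.fallen_since = now  # start the timer on the first "fallen" frame
            elif now - self.fallen_since >= FALL_CONFIRMATION_SECONDS:
                # In a deployed system this would notify an operator or emergency services.
                return "ALERT: person down and not moving"
        else:
            self.fallen_since = None  # person recovered or started moving
        return None

# Simulated frames: fallen and motionless for 31 seconds triggers the alert.
monitor = FallMonitor()
print(monitor.update(True, False, now=0))    # None (timer starts)
print(monitor.update(True, False, now=31))   # alert string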

Detection of weapons and aggressive behaviour

Large cities, whether New York or Moscow, may be broadly safe, but 2021 will surely be remembered for a number of high-profile violent incidents involving weapons, both on the streets and within the walls of educational institutions around the world. At a school in Kazan and a university in Perm, both in Russia, dozens of people were killed or injured. This led to a rise in interest in systems for detecting weapons and aggressive behaviour.

Today, it is possible to detect weapons in a video stream, provided they are visible, and within the next year the first such systems will appear commercially, operating in real conditions and with the accuracy needed to identify dangers. However, just as for the human eye, it will be extremely difficult for a system to identify a small gun in bad weather.

Another application is a system for detecting aggressive actions and potentially dangerous situations such as fights or a large crowd gathering. These systems will be introduced next year, initially in the form of pilot projects.

Look to the future

Artificial intelligence is a human decision support system, whether it’s driving facial, weapon or action recognition. But the decision whether to take action still rests with the system operator. Despite the rapid development of AI, it is still premature to talk about independent decision making.

AI also can’t predict aggression with high accuracy in advance of an incident. Of course, there is huge interest in these technologies, and there are developments within the industry, but in real-world conditions there are too many false positives for it to be viable in this context. And after all, facial emotional expressions don’t always signify impending aggression.
