COPCAMS (COgnitive & Perceptive CAMeraS) is an ARTEMIS project funded under grant agreement No. 332913. The project started on April 1, 2013 and ended on September 30, 2016. The project consortium consists of 21 partners from seven European countries.
Vision systems analysing images from multiple cameras will become the norm in the future, be it in large-scale surveillance, advanced manufacturing or traffic monitoring. COPCAMS leverages recent advances in embedded computing platforms to develop large-scale, integrated vision systems. It aims to exploit new programmable accelerators, particularly many-cores, to power a new generation of greener, low-power smart cameras and gateways.
This will be possible owing to a paradigm change: whereas previous generations of systems connected simple cameras to powerful centralised computing servers through high-bandwidth networking, the COPCAMS vision is to push low-power, high-performance computing to the edge of the system and into distributed aggregators. These “smart cameras” and “smart aggregators” will process video streams, extract significant semantic information and decide locally whether or not a stream’s content is of interest and worth propagating. This decentralised, distributed decision-making will save both energy and bandwidth, while opening up opportunities for new distributed applications.
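To make the edge-filtering idea concrete, here is a minimal, purely illustrative sketch of how a smart camera might decide locally whether a frame is worth propagating. The function names, the pixel-change test and the thresholds are hypothetical and are not taken from the COPCAMS design documents; a real node would run an accelerated analysis on 2-D images rather than this toy 1-D frame.

```python
# Hypothetical sketch: a smart-camera node forwards a frame only when
# enough pixels have changed since the previous frame. All names and
# thresholds here are illustrative, not from the COPCAMS design.

def changed_fraction(prev, curr, pixel_delta=16):
    """Fraction of pixels whose grey value moved by more than pixel_delta."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_delta)
    return changed / len(curr)

def should_propagate(prev, curr, scene_change=0.05):
    """Decide locally whether the frame is worth sending upstream."""
    return changed_fraction(prev, curr) > scene_change

# A static scene is dropped; a changed one is forwarded.
static = [10] * 100
moved = [10] * 80 + [200] * 20   # 20% of the pixels changed

assert not should_propagate(static, static)
assert should_propagate(static, moved)
```

The point of the sketch is the decision locality: the expensive upstream link is used only when the cheap local test fires, which is where the energy and bandwidth savings come from.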
This will be achieved by kick-starting a mixed hardware and software ecosystem aimed at reducing costs and development cycles. On the hardware side, COPCAMS will use low-power, high-performance computing solutions based on the latest advances in microelectronics and computing architectures, such as many-cores. On the software side, the project will focus on established and emerging tools and libraries, like OpenCL and OpenMP, and on programming interfaces proposed by industrial consortia. This will enable the COPCAMS ecosystem to cross-breed with other industrial sectors that share similar concerns. It is foreseen that such a rich and open ecosystem will foster the growth of a community of users that can easily share research efforts and, through composability and re-use of standardised components, dramatically reduce development times and enable wide cross-domain deployment.
The impact of COPCAMS will cover the complete value chain: academia and SMEs will have access to advanced many-core platforms on which to test and optimise innovative vision, coding and cognitive algorithms. Platform providers will grow an ecosystem of users and gain the possibility to explore new market opportunities. System integrators will benefit from the powerful platforms developed in COPCAMS and will be able to offer a new generation of vision-related products. And last, but not least, service providers will capitalise on the COPCAMS system to provide value-added services to end users, well beyond what can be offered today.
FROM VERTICAL TO HORIZONTAL
COPCAMS leverages recent advances in embedded computing platforms to design, prototype and field-test full, large-scale vision systems. It aims to exploit both new many-core platforms and GPU-based embedded architectures to power a new generation of vision-related devices that can extract relevant information from captured images and autonomously react to the sensed environment by interoperating on a large scale in a distributed manner. COPCAMS will facilitate the transition from the highly vertically structured embedded vision systems market towards a more horizontal one, creating new opportunities that can be addressed more easily by SMEs and start-ups.
Due to both algorithmic and computational complexity, embedded vision systems are nowadays conceived as special-purpose devices dedicated to quite narrow application domains. The COPCAMS solution will represent a significant step towards wider adoption of distributed, flexible embedded vision systems. COPCAMS will provide key enabling technologies to build smart environments, with a first application to environmental surveillance and advanced manufacturing. The COPCAMS many-core architecture and its flexible programming model will make the resulting solution more effective than today’s systems, which are based either on embedded processors and GPUs or on FPGAs.
ADVANCING THE STATE OF THE ART
COPCAMS will propose: many-core and embedded GPU techniques for image and video analysis, codecs and multi-sensor analysis; pre-processing steps to improve the quality and usefulness of still images; image and video understanding, object classification and recognition; highly parallel video coding schemes; sophisticated data fusion; and detection and tracking. On all these fronts, COPCAMS will advance the state of the art for embedded perception and vision algorithms, mainly in two ways: by adapting these techniques to many-core and other low-power platforms, and by using open-source libraries to enable efficient design and cost reduction.
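As an illustration of the detection-and-tracking step in such a pipeline, the following toy sketch detects changed pixels by frame differencing and tracks the object via the centroid of the detections. It runs in pure Python on a 1-D "frame"; all names and values are hypothetical, and a real embedded pipeline would operate on 2-D images with accelerated kernels.

```python
# Illustrative sketch of "detection and tracking": detect moving pixels
# by frame differencing, then track the object by its centroid.
# Toy 1-D frames; every name here is hypothetical.

def detect(prev, curr, delta=16):
    """Indices of pixels that changed between two frames (the detection)."""
    return [i for i, (a, b) in enumerate(zip(prev, curr)) if abs(a - b) > delta]

def centroid(indices):
    """Track the detected object by the mean position of changed pixels."""
    return sum(indices) / len(indices) if indices else None

background = [0] * 10
frame1 = [0, 0, 200, 200, 0, 0, 0, 0, 0, 0]   # object at positions 2-3
frame2 = [0, 0, 0, 0, 200, 200, 0, 0, 0, 0]   # object moved to 4-5

c1 = centroid(detect(background, frame1))   # 2.5
c2 = centroid(detect(background, frame2))   # 4.5
velocity = c2 - c1                          # object moved +2 pixels per frame
```

Even in this reduced form, the structure mirrors the embedded pipeline: a cheap per-pixel detection stage feeds a compact semantic summary (position, velocity) that is far smaller than the raw video stream.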
COPCAMS will have a significant impact on all the applications addressed: improved quality of goods and productivity, through more accurate inspection and more detailed monitoring of the assembly process; high flexibility through easy software customisation; larger systems with reduced communication requirements; better in situ image/video analysis, by porting server-class algorithms to embedded systems; and better precision and reliability of image/video processing, through higher spatial and temporal resolution.
One year after the start of the project, COPCAMS has defined its three main demonstrators: large-area surveillance, advanced manufacturing applications, and indoor and outdoor surveillance. The set of target platforms has also been selected; it ranges from advanced low-power many-cores to embedded GPU-based architectures. Some early technology demonstrators have been completed and have already been shown at international fairs and exhibitions.