Frequently Asked Questions

How is the data captured?
Using computer vision (cameras) placed within or around displays, the software captures customer behavior and turns their movement into data, but without taking pictures or recording any footage.
What data is captured?
The data that is collected and stored consists of measurements of the customer (dwell time, product interaction, demographics, etc.); it is the output of analyzing the camera feed, not captured images or footage. Frames from the camera are analyzed in real time in the computer's RAM using image-detection software and stored as time/date-stamped lines of text describing the type of image estimated (body/face) from the relationships between the data points of that image.
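To make the idea concrete, the sketch below shows what such a derived line of text might look like. The field order, names, and serialization are illustrative assumptions, not inReality's actual schema; the point is that only a time/date-stamped description of the detection is kept, never the pixels.

```python
from datetime import datetime, timezone

# Hypothetical record layout: the field order and names below are
# illustrative assumptions, not inReality's actual schema.
def make_record(detection_type: str, dwell_seconds: float) -> str:
    """Reduce one detection to a time/date-stamped line of text.

    detection_type is the estimated image type ("body" or "face");
    no pixels from the frame are stored, only this derived line.
    """
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"{ts}\t{detection_type}\t{dwell_seconds:.1f}"

line = make_record("face", 4.2)  # e.g. "2024-05-01T14:03:22+00:00\tface\t4.2"
```

The stored line can be analyzed and aggregated later, but it contains nothing from which the original image could be reconstructed.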
Is the data matched to other footage or data sets?
The data generated by analysis of the camera feed is kept independent of other systems. It is never matched to CCTV footage from traditional cameras or to any other dataset that would allow a customer to be identified.
Okay, so if you’re using cameras, where are you using them…and why?
The cameras are most commonly built into displays and, unless specifically called out, often go unnoticed by shoppers. We use them first and foremost to understand what is and is not resonating with shoppers. Like any other marketing, retailers want to make sure that their displays, products, prices and messages are capturing interest and conversion and, if not, can be fine-tuned or 'optimized' accordingly. Our software gives them the basic metrics to help them make these determinations.
To provide consistency in the metrics we capture, a viewing distance and viewing angle are calculated for each display based on the size of its screen. This effectively creates a 'visibility zone' for the display, and the camera's viewing angle is matched as closely as possible to this zone. Shoppers identified in this zone (with body detection) will have a clear view of the display should they turn toward it (with face detection). In most cases, the camera's viewing angle will be perpendicular to the aisle, so only the section of the aisle to the left and right of the camera will be used for image detection.
The software can also create 'exclusion zones', where image analysis data is not recorded, and the retailer can adjust them to suit their policy.
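The zone logic above can be sketched as a simple filter. This is a minimal illustration assuming zones are axis-aligned rectangles in the camera image, which is an assumption on our part; the actual zone shapes and coordinates are configurable.

```python
# Minimal sketch of zone filtering, assuming zones are axis-aligned
# rectangles (x0, y0, x1, y1) in the camera image; the actual zone
# geometry is an assumption, not inReality's implementation.

def in_zone(point, zone):
    x, y = point
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

def should_record(center, visibility_zone, exclusion_zones):
    """Record a detection only if it falls inside the visibility zone
    and outside every retailer-configured exclusion zone."""
    if not in_zone(center, visibility_zone):
        return False
    return not any(in_zone(center, ez) for ez in exclusion_zones)

visibility = (100, 0, 540, 480)    # section of aisle facing the display
checkout_lane = (100, 0, 200, 480) # example retailer-defined exclusion zone

keep = should_record((300, 240), visibility, [checkout_lane])  # True
drop = should_record((150, 240), visibility, [checkout_lane])  # False
```

Detections failing the check are simply never turned into records, so nothing from an excluded area enters the dataset.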
It seems like this could cause some concerns with specific customers. Should there be signage telling the shoppers what’s going on?
The exact metrics we capture could equally be captured by CCTV cameras and, like CCTV cameras, cannot be avoided by a customer unless they avoid the aisle. However, as this is a new way of collecting and measuring customer data, some retailers choose to post signage or a sticker indicating that cameras are in use but are not recording or capturing any images.
On that note, is what you’re doing truly anonymous?
inReality’s Analytics Platform, since its inception, has adopted a “privacy by design” approach to its development effort. It is designed and maintained to meet both GDPR and CCPA requirements. When a sensor type is added to the platform, it is validated to meet these standards.
To ensure that we’re upholding privacy promises, we follow a process that takes privacy into strict account and never store a shopper’s unique facial algorithm. Again, we also do not record or store any personal data. Following is our detailed process:
- Bodies and faces are identified in the images or frames coming from the camera. Each frame moves into our system for only a few thousand milliseconds for processing.
- The video image is held in RAM (volatile memory) only for the time necessary for processing, during which our software performs real-time analysis and generates non-personally identifying information (non-PII).
- The software converts the image into a small set of anonymous data points that are stored (number of people, body gestures, estimated age group, estimated gender), and the image frame is permanently deleted.
- The volatile storage area in which incoming digital images are held is overwritten each time a new image is delivered, thereby permanently erasing all traces of previous visual information.
- Dashboard visualizations and reports are aggregated by hour and by day, not individualized. Examples include 'Total Number of Viewable Impressions' and 'Percentage Share of Visual Engagement by Day of Week'.
Finally, we never share the raw time-stamped data, ensuring that it can’t be matched to another dataset to personally identify an individual.
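The steps above can be sketched as a small processing loop: each frame lands in volatile memory, is reduced to anonymous counts, and is then discarded before the next frame overwrites the buffer. This is an illustrative sketch of the described process, not inReality's actual code; the class and metric names are assumptions.

```python
# Minimal sketch of the in-RAM pipeline described above (illustrative
# only): a frame is analyzed in volatile memory, reduced to anonymous
# aggregated counts, and the buffer is cleared/overwritten afterward.
from collections import Counter

class FrameProcessor:
    def __init__(self):
        self.buffer = None        # volatile storage for the current frame
        self.hourly = Counter()   # aggregated, non-individualized metrics

    def process(self, frame_bytes, detections, hour):
        self.buffer = frame_bytes   # overwrites any previous frame data
        # Real-time analysis would run here; only anonymous counts survive.
        self.hourly[(hour, "viewable_impressions")] += len(detections)
        self.buffer = None          # frame is discarded after analysis

proc = FrameProcessor()
proc.process(b"<frame-1>", ["body", "body", "face"], hour=14)
proc.process(b"<frame-2>", ["body"], hour=14)
# proc.hourly now holds only per-hour totals; no frame data remains.
```

Because only the hourly counters persist, there is nothing individualized left to share or to match against another dataset.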
How about all the data the software is capturing and correlating around the world? Is it syndicated for aggregated insights?
Unlike some of our competitors, we strictly avoid syndicating the data we control or deriving anonymized shopper insights from it.
Are there data security concerns?
inReality's SaaS platform is built with an industry-standard multi-tenant architecture. It is deployed on the Amazon Web Services platform and leverages its built-in security features.

Cloud Data Storage
- Each customer's information and data is partitioned into individual accounts in the inReality data lake. All data at rest is encrypted and protected.
- To access data associated with a specific partition of the data lake, the application must authenticate and validate the customer account information and permissions.
- Stored event logs and screening data are managed through Amazon S3 server-side encryption with KMS-managed keys (SSE-KMS).
- The data lake uses strong AES 256-bit encryption with a hierarchical key model rooted in a hardware security module. Keys are automatically rotated on a regular basis by the service, and data can be automatically re-encrypted ('rekeyed') on a regular basis.
- The database used for storing all other data is encrypted using AES-256-CBC (256-bit Advanced Encryption Standard in Cipher Block Chaining mode).
- Data is retained anonymously in the cloud for at least one year. How long non-anonymized data is retained in the cloud can be configured according to customer requirements.
You are scanning customer bodies and skeletons – is this potentially a form of biometric information and as such might be special category data under the GDPR legislation?
There are two methods for capturing bodies:
- The skeletal method, where a skeletal outline of the body is estimated; and
- The box method, where we simply place a box around a detected body in the image.
Unless a specific application requires otherwise, such as when the position of hands or feet is needed for analysis, we use the box method for body detection, so no biometric data is created.
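The distinction can be shown with a toy comparison: a bounding box is just four coordinates and carries no body-structure detail, while a skeletal estimate stores per-joint positions. The keypoint names below are illustrative assumptions, not inReality's actual output format.

```python
# Illustrative comparison of the two body representations; the joint
# names are assumptions for the sake of the example.

box = (120, 40, 260, 420)  # (x0, y0, x1, y1): no body-structure detail

skeleton = {               # per-joint positions: far richer body structure
    "head": (190, 60),
    "left_hand": (130, 250),
    "right_hand": (250, 250),
    "left_foot": (160, 410),
    "right_foot": (220, 410),
}

# The box method stores 4 numbers; the skeletal method stores 2 per joint.
box_values = len(box)
skeleton_values = 2 * len(skeleton)
```

The box discards exactly the structural detail that could make a representation biometric in character, which is why it is the default.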
Can I have some more information about mood? How do you do this?
This feature assigns a mood estimation based on the distances between particular data points (pixel clusters) on a detected face. For example, if the corners of the mouth turn downward, the software estimates that the detected face is 'Sad'; if the eyebrows are raised, this is captured as 'Surprised'.
Although this feature is offered as standard through the detection software, it is optional.
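A toy rule-based version of the idea looks like this. The landmark names, coordinates, and thresholds are all assumptions made up for illustration; the detection software's actual logic is more sophisticated.

```python
# Toy rule-based mood estimator over facial landmarks in pixel
# coordinates (y grows downward). Landmark names and thresholds are
# illustrative assumptions, not the detection software's actual logic.

def estimate_mood(landmarks):
    mouth_l = landmarks["mouth_left"]
    mouth_r = landmarks["mouth_right"]
    mouth_c = landmarks["mouth_center"]
    brow = landmarks["brow_center"]
    eye = landmarks["eye_center"]

    # Raised eyebrows: unusually large brow-to-eye distance -> 'Surprised'
    if eye[1] - brow[1] > 20:
        return "Surprised"
    # Mouth corners lower than the mouth center -> 'Sad'
    if mouth_l[1] > mouth_c[1] and mouth_r[1] > mouth_c[1]:
        return "Sad"
    return "Neutral"

sad_face = {
    "mouth_left": (40, 82), "mouth_right": (70, 82), "mouth_center": (55, 78),
    "brow_center": (55, 30), "eye_center": (55, 42),
}
mood = estimate_mood(sad_face)  # "Sad"
```

Note that the input here is only a handful of landmark distances; no image or unique facial template is needed or retained.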
Are you willing to undergo a Privacy Impact Assessment?
Yes, we are happy to.