EPG Intelligent Correction System

Description:

One of the most widely used services among a broadcaster's audience is the Electronic Program Guide (EPG). The accuracy of this guide is therefore one of the most important challenges for audio and video content distributors. This project was designed to increase the accuracy and efficiency of such systems, so that their effectiveness is achieved intelligently and without human intervention.


Basis of work

Briefly, the system works by comparing fingerprints extracted from videos in the archive against fingerprints extracted from the antenna (transport stream, TS) or network stream, and measuring how closely they match.

First, frames and interest points that remain robust to compression and added noise, up to acceptable thresholds, are selected. From these points, if enough are present in the frame, the required number is chosen to form the feature vector of that frame. These vectors, each pointing to a specific frame of a video, are stored in a NoSQL database as a series of text files.
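The point-selection step above can be sketched as follows. This is a minimal illustration, not the actual C++ engine: the real detector and descriptor are not specified in the text, so the local-contrast score, the function names, and the parameter `num_points` are all assumptions.

```python
# Hypothetical sketch: pick the highest-contrast pixels of a frame and
# flatten them into a fixed-length feature vector for that frame.

def interest_points(frame, num_points=8):
    """Return the num_points pixels with the highest local contrast.

    frame: 2D list of grayscale values. Border pixels are skipped so
    every candidate has a full 4-neighbourhood.
    """
    scored = []
    for y in range(1, len(frame) - 1):
        for x in range(1, len(frame[0]) - 1):
            centre = frame[y][x]
            contrast = (abs(centre - frame[y - 1][x]) + abs(centre - frame[y + 1][x])
                        + abs(centre - frame[y][x - 1]) + abs(centre - frame[y][x + 1]))
            scored.append((contrast, y, x))
    scored.sort(reverse=True)           # strongest points first
    return scored[:num_points]

def feature_vector(frame, num_points=8):
    """Flatten the selected points into one vector of (score, y, x) triples."""
    vec = []
    for score, y, x in interest_points(frame, num_points):
        vec.extend([score, y, x])
    return vec
```

In the described system such a vector, tagged with its video identity and frame number, would then be written out as a line in one of the text files held in the NoSQL store.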

Frames read from the antenna are divided and dispatched to the feature extractor across multiple threads; the resulting feature vectors are computed simultaneously and then compared against the vectors in the database and scored for the degree of match.
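The parallel dispatch can be sketched as below. The helper name `extract_all` and the worker count are assumptions; the extractor is passed in as a parameter so any per-frame function can be used.

```python
# Sketch of running the feature extractor over incoming frames in
# parallel threads while preserving frame order.
from concurrent.futures import ThreadPoolExecutor

def extract_all(frames, extractor, workers=4):
    """Apply extractor to every frame concurrently; results keep frame order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extractor, frames))
```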

To this end, the vectors in the database are categorized and indexed according to their structure using a combination of a mixture model and Euclidean distance. A vector extracted from the antenna is then matched, one by one, against the vectors in its own class in the database. The highest match score above the threshold determines the output and, as a result, the identity of the video and the frame number within it.

The following figure shows the process.

Practical Results

The feature-extractor engine is implemented in C++ and the search engine in Java on Spark II. During a trial at the Islamic Republic of Iran Broadcasting organization, the system performed well, with over 98% accuracy. At most 2% of errors occur when frames in two different videos happen to be similar beyond the threshold; such collisions do not persist across the subsequent frames of the video. Therefore, by a macroscopic examination of the string of matches accumulated over a delay of a few minutes, a chromosome-like string can be built and unrelated frames replaced with the correct ones. Moreover, using other data and metadata available in the broadcast department, this accuracy can approach 100%.
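The macroscopic correction step can be sketched as a majority vote over a sliding window of matched video identities: an isolated mismatch surrounded by consistent matches is overwritten by its neighbours. The function name, the window size, and the plain majority-vote rule are assumptions standing in for the chromosome-string procedure the text describes.

```python
# Sketch: smooth the per-frame match string so isolated wrong matches
# are replaced by the majority identity of the surrounding window.
from collections import Counter

def smooth_matches(video_ids, window=5):
    """Replace each matched video id by the majority id in a window
    centred on it, correcting isolated frame-level collisions."""
    half = window // 2
    out = []
    for i in range(len(video_ids)):
        lo = max(0, i - half)
        hi = min(len(video_ids), i + half + 1)
        out.append(Counter(video_ids[lo:hi]).most_common(1)[0][0])
    return out
```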

Since the videos read from the antenna (TS) or network stream average about 800 kbps, the compression relative to the source is sometimes 50 to 70 times. This is an indication of the strength of the feature-extraction algorithm, which remains robust against this kind of degradation.
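As a quick sanity check of that ratio (illustrative arithmetic only): an 800 kbps stream compressed 50 to 70 times implies a source bitrate of roughly 40 to 56 Mbps.

```python
# Illustrative arithmetic for the stated compression ratios.
stream_kbps = 800
for ratio in (50, 70):
    source_mbps = stream_kbps * ratio / 1000
    print(f"{ratio}x compression implies a {source_mbps:.0f} Mbps source")
```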