Saved videoconferences are processed by several Machine Learning algorithms: #NLP transcribes speech to text, another algorithm performs #FaceRecognition, and a third detects speaker activity from lip movement to bind each phrase to the right speaker. The collected information is structured as metadata linked to the video and stored in a database.
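The speaker-binding step can be sketched as follows. This is a minimal illustration, not the actual implementation: the `Segment` layout, the `bind_speakers` function, and the overlap-based matching rule are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Segment:
    start: float                    # seconds from video start
    end: float
    text: str                       # phrase from speech-to-text
    speaker: Optional[str] = None   # filled in by lip-activity matching

def bind_speakers(segments, lip_activity):
    """Assign each transcribed phrase to the participant whose lips
    were moving during that window (hypothetical rule: pick the
    speaker whose lip-movement intervals overlap the phrase most)."""
    for seg in segments:
        best, best_overlap = None, 0.0
        for speaker, windows in lip_activity.items():
            overlap = sum(
                max(0.0, min(seg.end, w_end) - max(seg.start, w_start))
                for w_start, w_end in windows
            )
            if overlap > best_overlap:
                best, best_overlap = speaker, overlap
        seg.speaker = best
    return segments

# Example inputs: transcript segments plus per-participant
# lip-movement intervals produced by the vision pipeline.
segments = [Segment(0.0, 4.0, "Let's review the roadmap."),
            Segment(4.5, 8.0, "I'll prepare the estimates.")]
lip_activity = {"Alice": [(0.0, 4.2)], "Bob": [(4.4, 8.1)]}
metadata = bind_speakers(segments, lip_activity)
```

The resulting speaker-tagged segments are exactly the kind of metadata record that can be attached to the video in the database.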
The self-service portal lets users sort and filter videos by participant, topic, video name, date, and length. The portal's internal search engine helps users find specific moments inside videos and jump directly to the relevant fragment by clicking an item in the search results.
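A moment search over that metadata could look like the sketch below. The metadata layout and the `search_moments` function are hypothetical; the point is that each hit carries a timestamp the portal can navigate to.

```python
def search_moments(videos, query, participant=None):
    """Return (video_name, start_seconds, text) for every phrase that
    contains the query, optionally restricted to one participant.
    Assumes each video record carries the segment list produced by
    the processing pipeline."""
    hits = []
    for video in videos:
        for seg in video["segments"]:
            if participant and seg["speaker"] != participant:
                continue
            if query.lower() in seg["text"].lower():
                hits.append((video["name"], seg["start"], seg["text"]))
    return hits

# Example metadata for one stored videoconference.
videos = [{
    "name": "Sprint planning",
    "date": "2024-03-01",
    "segments": [
        {"start": 12.0, "speaker": "Alice",
         "text": "The roadmap needs an update."},
        {"start": 95.5, "speaker": "Bob",
         "text": "I'll send the estimates."},
    ],
}]

results = search_moments(videos, "roadmap")
```

Clicking a search result would seek the player to the returned `start` offset of that video.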
The system can evolve by adding new features: gesture recognition for bookmarking or other purposes, recognition or detection of specific objects, and so on. There are many directions for the system's growth; for example, we could add automatic e-mailing of meeting minutes, with commitments and conclusions, to all meeting participants.