6. A Novel Framework of Continuous Human-Activity Recognition using Kinect

We present a novel Coarse-to-Fine framework for continuous Human Activity Recognition (HAR) using Microsoft Kinect. Activity sequences are captured as 3D skeleton trajectories consisting of the 3D positions of 20 joints estimated from the depth data. The recorded sequences are first coarsely segmented into activities performed while sitting and while standing. Next, the activities present in the segmented sequences are recognized at a fine level. Classification in both stages is performed using a Bidirectional Long Short-Term Memory Neural Network (BLSTM-NN) classifier. A total of 1110 continuous activity sequences have been recorded to analyze the performance.
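The two-stage routing idea can be sketched as follows. The stub classifiers below merely stand in for the BLSTM-NNs used in the actual work, and all joint names, fields, and thresholds are illustrative assumptions, not the paper's model:

```python
# Hedged sketch of coarse-to-fine recognition: a coarse classifier first labels a
# skeleton sequence as "sitting" or "standing", then a posture-specific fine
# classifier names the activity. Stubs replace the BLSTM-NNs; the "hip_y" and
# "hand_speed" features and all thresholds are illustrative assumptions.

def coarse_posture(sequence):
    """Stub for the coarse-stage BLSTM-NN: sitting vs. standing,
    decided here from the mean height of the hip-centre joint."""
    mean_hip_height = sum(frame["hip_y"] for frame in sequence) / len(sequence)
    return "sitting" if mean_hip_height < 0.5 else "standing"

def fine_activity(sequence, posture):
    """Stub for the posture-specific fine-stage BLSTM-NN."""
    motion = max(frame["hand_speed"] for frame in sequence)
    if posture == "sitting":
        return "typing" if motion > 0.3 else "reading"
    return "waving" if motion > 0.3 else "standing still"

def recognize(sequence):
    posture = coarse_posture(sequence)
    return posture, fine_activity(sequence, posture)

seq = [{"hip_y": 0.30, "hand_speed": 0.5}, {"hip_y": 0.35, "hand_speed": 0.6}]
print(recognize(seq))  # ('sitting', 'typing')
```

The benefit of the coarse stage is that each fine classifier only has to discriminate among the activities plausible for its posture, shrinking the label space at each step.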

5. Independent Bayesian Classifier Combination based Sign Language Recognition using Facial Expression

We present a multimodal framework for a Sign Language Recognition (SLR) system that incorporates facial expression alongside sign gestures, using two different sensors, namely Leap Motion and Kinect. Sign gestures are recorded using Leap Motion while a Kinect simultaneously captures the signer's facial data. We apply the Independent Bayesian Classifier Combination (IBCC) approach to combine the decisions of the different modalities and improve recognition performance. The proposed multimodal framework achieves gains of 1.84% and 2.60% over the unimodal framework on single- and double-hand gestures, respectively.
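A much-simplified view of decision-level fusion can be sketched as below. This is only a conditional-independence (naive Bayes style) product of per-modality posteriors, not the full IBCC model, which additionally infers per-classifier confusion matrices; the class names and probabilities are invented for illustration:

```python
def fuse_posteriors(modality_posteriors):
    """Combine per-modality class posteriors by multiplying them under a
    conditional-independence assumption and renormalising. This is a
    simplified stand-in for IBCC, which also learns how reliable each
    modality's classifier is via confusion matrices."""
    classes = modality_posteriors[0].keys()
    fused = {c: 1.0 for c in classes}
    for posterior in modality_posteriors:
        for c in classes:
            fused[c] *= posterior[c]
    z = sum(fused.values())
    return {c: p / z for c, p in fused.items()}

# Illustrative outputs of the two single-modality classifiers.
hand = {"hello": 0.6, "thanks": 0.4}   # Leap Motion gesture classifier
face = {"hello": 0.7, "thanks": 0.3}   # Kinect facial-expression classifier

fused = fuse_posteriors([hand, face])
print(max(fused, key=fused.get))  # hello
```

When the two modalities agree, the fused posterior is sharper than either alone, which is one intuition for why adding facial expression helps.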

4. Multimodal Gait Recognition with Inertial Sensor Data and Video using Evolutionary Algorithm

Human gait is a proven biometric trait with applications in security and authentication. In this approach, gait data are recorded simultaneously using motion sensors and a visible-light camera. The motion-sensor signals are modeled using a long short-term memory (LSTM) neural network, and the corresponding video recordings are processed using a three-dimensional convolutional neural network (3D-CNN). The Grey Wolf Optimizer (GWO) is used to optimize the parameters during fusion. To test the model, a dataset was developed in which the subjects perform four different types of walk: normal walk, fast walk, walking while listening to music, and walking while watching multimedia content on a mobile phone. An overall accuracy of 91.3% has been recorded across all test scenarios.
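A minimal one-dimensional GWO can illustrate how such a fusion parameter might be tuned. The toy objective below (a quadratic minimised at an assumed fusion weight of 0.7) and all hyperparameters are illustrative; the real system optimises the combination of LSTM and 3D-CNN scores:

```python
import random

def gwo_minimize(loss, lb, ub, wolves=10, iters=50, seed=0):
    """Minimal 1-D Grey Wolf Optimizer sketch: the pack is pulled toward
    the three best wolves (alpha, beta, delta) with a step size that
    decays over iterations. Hyperparameters are illustrative."""
    rng = random.Random(seed)
    pack = [rng.uniform(lb, ub) for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=loss)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2 * (1 - t / iters)            # encircling coefficient decays 2 -> 0
        new_pack = []
        for x in pack:
            candidates = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random() - a
                C = 2 * rng.random()
                candidates.append(leader - A * abs(C * leader - x))
            x_new = sum(candidates) / 3    # average of the three pulls
            new_pack.append(min(max(x_new, lb), ub))
        new_pack[0] = alpha                # elitism: keep the best wolf
        pack = new_pack
    return min(pack, key=loss)

# Hypothetical fusion objective, minimised at weight w = 0.7
# (i.e. 70% video score + 30% inertial score in this toy setup).
best_w = gwo_minimize(lambda w: (w - 0.7) ** 2, 0.0, 1.0)
print(best_w)
```

Because the best wolf is carried over unchanged each iteration, the loss of the returned solution never increases, so even this tiny sketch converges reliably on a smooth 1-D objective.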

3. EEG-based Age and Gender Prediction Using Deep BLSTM-LSTM Network Model

A deep BLSTM-LSTM network has been used to construct a hybrid learning framework for predicting a person's age and gender through EEG analysis. Accuracies of 93.7% and 97.5% have been recorded for the age and gender classification problems, respectively. Our analysis also reveals that the beta-band frequencies are better at predicting age and gender than the other frequency bands of the EEG signals.
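One preprocessing step implied by the band-level finding is estimating per-band power. A minimal FFT-based sketch, assuming a 256 Hz sampling rate and the conventional band edges (beta roughly 13-30 Hz, theta roughly 4-8 Hz); the synthetic signal is illustrative and the BLSTM-LSTM model itself is not reproduced:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(fs * 2) / fs                 # two seconds of synthetic "EEG"
eeg = np.sin(2 * np.pi * 20 * t) + 0.1 * np.sin(2 * np.pi * 5 * t)

beta = band_power(eeg, fs, 13.0, 30.0)     # dominated by the 20 Hz component
theta = band_power(eeg, fs, 4.0, 8.0)
print(beta > theta)  # True
```

In a full pipeline, such per-band powers (or the band-filtered signals themselves) would be the features fed to the recurrent network, which is what makes the per-band comparison in the study possible.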

2. Movie Recommendation System using Sentiment Analysis from Microblogging Data

Recommendation systems (RS) are an important class of information-filtering systems in the modern age, where enormous amounts of data are readily available. RS are widely used in digital entertainment, such as Netflix, Prime Video, and IMDb, and in e-commerce portals such as Amazon, Flipkart, and eBay.
To improve RS results for movies, this paper proposes a hybrid movie RS that leverages the best of collaborative filtering and content-based filtering, along with sentiment analysis of tweets from microblogging sites.
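The blending idea can be sketched as a weighted combination of the three signals. The weights, score ranges, and toy data below are illustrative assumptions, not the paper's actual model or dataset:

```python
# Hedged sketch of a hybrid recommender score: blend a collaborative-filtering
# score, a content-based score, and a mean tweet-sentiment score into a single
# ranking score. Weights and data are illustrative, not the paper's values.

def hybrid_score(cf, content, sentiment, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of three scores, each assumed normalised to [0, 1]."""
    return weights[0] * cf + weights[1] * content + weights[2] * sentiment

movies = {
    # movie: (CF score, content score, mean tweet sentiment)
    "Movie A": (0.9, 0.6, 0.4),
    "Movie B": (0.7, 0.8, 0.9),
    "Movie C": (0.4, 0.5, 0.6),
}

ranked = sorted(movies, key=lambda m: hybrid_score(*movies[m]), reverse=True)
print(ranked)  # ['Movie B', 'Movie A', 'Movie C']
```

Note how the sentiment term lifts "Movie B" above "Movie A" despite its weaker collaborative score, which is the kind of correction a tweet-sentiment signal is meant to provide.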

1. A Modified-LSTM Model for Continuous Sign Language Recognition using Leap Motion

We propose an approach for continuous Sign Language Recognition that recognizes sequences of connected gestures. It is based on splitting continuous signs into sub-units and modeling them with neural networks, so that different combinations of sub-units need not be considered during training. The proposed system has been tested on 942 signed sentences of Indian Sign Language (ISL). Average accuracies of 72.3% and 89.5% have been recorded on signed sentences and isolated sign words, respectively.
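The splitting step can be sketched as cutting the stream at low-motion frames, after which each sub-unit is classified independently. The hand-speed feature, pause threshold, and toy sequence below are illustrative assumptions, not the paper's segmentation rule:

```python
# Hedged sketch of sub-unit splitting: a continuous sign stream is cut into
# sub-units wherever per-frame motion drops below a pause threshold, so each
# sub-unit can be recognised on its own and combinations of sub-units never
# need to appear in the training data. Threshold and data are illustrative.

def split_subunits(speeds, pause_thresh=0.1):
    """Split a per-frame hand-speed sequence into sub-units at pauses."""
    units, current = [], []
    for s in speeds:
        if s < pause_thresh:
            if current:          # a pause closes the current sub-unit
                units.append(current)
                current = []
        else:
            current.append(s)
    if current:
        units.append(current)
    return units

# Three gestures separated by two brief pauses (speeds 0.05 and 0.02).
speeds = [0.5, 0.6, 0.05, 0.4, 0.45, 0.02, 0.7]
print(split_subunits(speeds))  # [[0.5, 0.6], [0.4, 0.45], [0.7]]
```

Each returned sub-unit would then be passed to the sequence classifier, so a sentence is recognised as the concatenation of its sub-unit labels rather than as one monolithic pattern.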