2024 Summer Research Fellowship Projects
Eighteen projects were selected for the 2024 summer program. Projects were mentored by faculty from schools across Stevens. Information about each of the selected projects can be found below.
Stepping Up Personalized Human Activity Recognition with Multi-Sensor Fusion and Transfer Learning
Fellow: Dev Satapara, Computer Science Graduate Student
Faculty Mentor and Affiliation: Dr. Damiano Zanotto, Department of Mechanical Engineering
Project Description: Wearable-based Human Activity Recognition (HAR), which refers to using wearable devices to identify a person's physical activities, has numerous applications in fitness/wellbeing and digital health. While this area is relatively mature, current machine learning (ML) methods for HAR have demonstrated only moderate accuracy, especially for individuals with neurological conditions. This limitation stems from "one-size-fits-all" models that fail to account for individual differences and the unique movement patterns associated with such conditions. Low accuracy poses a significant challenge in clinical research and digital health applications, where it hinders the detection of subtle yet clinically meaningful changes in everyday physical function. These changes, known as digital biomarkers, can reflect underlying neurological conditions or indicate positive responses to new therapies/interventions. This project aims to address this need by developing novel, personalized ML models for HAR. The AIRS fellow will leverage data collected from a set of in-shoe sensors, wrist-worn sensors, phone-embedded sensors, or a combination thereof, to identify the wearer's ambulatory activity. A transfer learning framework will be employed to fine-tune (personalize) both classical and deep learning models to each user, achieving high performance without requiring any subject-specific labeled data (thus ensuring scalability). Wrapper- and filter-based feature selection techniques will be used to determine the optimal subset of sensor signals that contribute most to accurate HAR. To train and validate these models with real-world data, the student will utilize an existing, week-long labeled dataset collected by our team. This dataset uniquely combines smart insoles, wrist-worn sensors, and phone-app data from a group of individuals, and distinguishes itself through its reliance on gold-standard labels obtained using a body-worn camera. The project's deliverables include a technical publication (conference or journal) disseminating these findings to the international digital health community.
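For illustration, the following minimal sketch shows the kind of transfer-learning personalization described above: a generic HAR network pretrained on many subjects is adapted to a new wearer by updating only its final layer, here using the base model's own predictions as stand-in labels since no subject-specific annotations are assumed. The architecture, window size, and pseudo-labeling step are simplifying assumptions, not the project's actual pipeline.

```python
# Minimal sketch (PyTorch): personalizing a generic HAR classifier via transfer learning.
import torch
import torch.nn as nn

class HARNet(nn.Module):
    def __init__(self, n_channels=6, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature extractor
            nn.Conv1d(n_channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(64, n_classes)       # subject-specific layer

    def forward(self, x):                          # x: (batch, channels, time)
        return self.head(self.backbone(x))

model = HARNet()
# model.load_state_dict(torch.load("generic_har.pt"))  # pretrained on many subjects

# Freeze the backbone; only the head is adapted to the new wearer.
for p in model.backbone.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)

# Unlabeled sensor windows from the new wearer; the base model's predictions
# serve as pseudo-labels (one common label-free adaptation trick).
new_windows = torch.randn(128, 6, 100)
with torch.no_grad():
    pseudo = model(new_windows).argmax(dim=1)

for _ in range(10):                                # brief fine-tuning loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(new_windows), pseudo)
    loss.backward()
    opt.step()
```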
Empowering People with Visual Impairments to Write Independently with Smart Glasses
Fellow: Gabriel Talbert Bunt, Computer Science Undergraduate Student
Faculty Mentor and Affiliation: Dr. Jonggi Hong, Department of Computer Science
Project Description: Visual impairment presents significant challenges in daily life, especially in written communication. While assistive technologies like Braille or text-to-speech software help people with visual impairments (PVI) access written information, limitations remain, particularly when it comes to independent writing on physical paper. Smart glasses have emerged as a promising solution, utilizing advanced optical technologies and augmented reality to enhance users' perception of the world. Leveraging this potential, this research aims to empower people with visual impairments to write independently through the development of a handwriting assistance tool on smart glasses. By integrating computer vision and non-visual feedback, we seek to create a comprehensive solution including virtual worksheets and handwriting assistance functionality, ultimately enabling PVI to overcome barriers to generating written content.
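As a rough illustration of how computer vision could drive non-visual handwriting feedback, the sketch below detects the ruled lines of a worksheet in a camera frame and converts a pen tip's vertical drift into a spoken-style cue. The frame source, pen-tip position, and feedback channel are placeholders; the project's actual smart-glasses tool would replace all of them.

```python
# Illustrative sketch only: locating worksheet guide lines with OpenCV and
# turning the pen tip's offset into simple non-visual feedback.
import cv2
import numpy as np

def find_guide_lines(frame_bgr):
    """Return y-coordinates of roughly horizontal ruled lines in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=200, maxLineGap=10)
    ys = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if abs(y2 - y1) < 5:          # keep near-horizontal segments
                ys.append((y1 + y2) // 2)
    return sorted(set(ys))

def feedback(pen_y, line_ys, tolerance=10):
    """Map vertical drift from the nearest guide line to a spoken-style cue."""
    if not line_ys:
        return "no line detected"
    nearest = min(line_ys, key=lambda y: abs(y - pen_y))
    if abs(pen_y - nearest) <= tolerance:
        return "on the line"
    return "move up" if pen_y > nearest else "move down"

frame = np.full((480, 640, 3), 255, dtype=np.uint8)    # placeholder white frame
cv2.line(frame, (0, 240), (639, 240), (0, 0, 0), 2)    # a synthetic ruled line
print(feedback(pen_y=260, line_ys=find_guide_lines(frame)))
```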
Assessing the Accuracy and Limitations of the AlphaFold2 Neural Network in Search of ML/AI Improvements
Fellow: Rhea Bachani, Chemical Biology Undergraduate Student
Faculty Mentor and Affiliation: Dr. Faith Kim, Department of Chemistry and Chemical Biology
Project Description: AlphaFold is an AI program developed by DeepMind in late 2018 to predict a protein's 3D structure from its amino acid sequence. It is a bioinformatics approach with two complementary considerations: (1) the thermodynamic or kinetic interactions at the atomic and molecular level, and (2) the evolutionary correlations and homology present in protein amino acid sequences. The approach improved dramatically with the second version of AlphaFold, which achieved over 90% accuracy in the CASP14 assessment (Critical Assessment of Structure Prediction). Currently, more than 200,000 experimental and 1 million AlphaFold-computed protein structures are freely available to explore in the RCSB Protein Data Bank (www.rcsb.org). This investigation will provide a deep understanding of protein structure, how ML/AI overcomes the experimental and computational challenges of protein structure prediction, and how the algorithm can be improved.
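For readers who want to explore these predictions directly, the short sketch below downloads one model from the AlphaFold Database and reads its per-residue confidence scores (pLDDT), which AlphaFold stores in the PDB B-factor column. The accession and file-version suffix are assumptions to adjust for the protein of interest.

```python
# A small sketch: fetch one AlphaFold model and summarize its pLDDT confidence.
import urllib.request
from statistics import mean
from Bio.PDB import PDBParser   # Biopython

accession = "P69905"            # human hemoglobin alpha, as an example
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"
urllib.request.urlretrieve(url, "model.pdb")

structure = PDBParser(QUIET=True).get_structure(accession, "model.pdb")
plddt = [res["CA"].get_bfactor()               # one confidence value per residue
         for res in structure.get_residues() if "CA" in res]

print(f"{len(plddt)} residues, mean pLDDT = {mean(plddt):.1f}")
print("low-confidence residues (pLDDT < 70):",
      sum(1 for v in plddt if v < 70))
```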
Unsupervised Learning of Multimodal Generative Frameworks
Fellow: Aman Gupta, Machine Learning Graduate Student
Faculty Mentor and Affiliation: Dr. Tian Han, Department of Computer Science
Project Description: Generative AI has gained increasing attention in recent years and has achieved remarkable progress in applications across domains such as computer vision, natural language processing, and healthcare. This project involves exploring and developing novel generative AI frameworks that can generate and learn common knowledge from multimodal data (e.g., images, text, and video). Such a framework could benefit a wide range of applications, such as text-to-image/video generation, image captioning, language translation, and many more.
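As one toy illustration of learning shared knowledge across modalities, the sketch below aligns image and text embeddings in a common space with a CLIP-style contrastive loss; the encoders and data are placeholders rather than the novel frameworks the project will develop.

```python
# Toy sketch (PyTorch): align paired images and texts in a shared embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

img_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in encoders
txt_enc = nn.EmbeddingBag(num_embeddings=5000, embedding_dim=128)

images = torch.randn(16, 3, 32, 32)                 # a batch of paired data
texts = torch.randint(0, 5000, (16, 12))            # token ids of matching captions

z_img = F.normalize(img_enc(images), dim=-1)
z_txt = F.normalize(txt_enc(texts), dim=-1)

logits = z_img @ z_txt.t() / 0.07                   # pairwise similarities
targets = torch.arange(16)                          # i-th image matches i-th text
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.t(), targets)) / 2   # symmetric contrastive loss
loss.backward()
print(float(loss))
```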
Multilingual Persuasion Detection
Fellow: Prasoon Parashar, Computer Science Graduate Student
Faculty Mentor and Affiliation: Dr. Koduvayur Subbalakshmi, Department of Electrical and Computer Engineering
Project Description: Mis/disinformation takes many forms in online media and has been shown to have an adverse effect on public health and democracy, as seen during the recent COVID-19 pandemic. This project will design a system to identify discrepancies in information about a query topic between documents in two languages. Specifically, we will design large language model (LLM) based systems that can quantify alignment between any pair of documents presented to them along several axes of persuasion mechanisms.
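A rough sketch of the intended analysis is shown below: each document is scored along a set of persuasion axes by an LLM, and the per-axis differences quantify cross-language discrepancy. The axis list, prompt wording, and the stubbed call_llm function are placeholders, not the project's final design.

```python
# Sketch: per-axis persuasion profiles for two documents and their discrepancy.
import json

AXES = ["appeal to fear", "loaded language", "appeal to authority", "doubt"]

def build_prompt(document: str) -> str:
    return ("Rate the following document from 0 to 1 on each persuasion technique: "
            + ", ".join(AXES)
            + ". Answer as a JSON object mapping technique to score.\n\n" + document)

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; here it just returns neutral scores.
    return json.dumps({axis: 0.5 for axis in AXES})

def persuasion_profile(document: str) -> dict:
    return json.loads(call_llm(build_prompt(document)))

def axis_discrepancy(doc_a: str, doc_b: str) -> dict:
    pa, pb = persuasion_profile(doc_a), persuasion_profile(doc_b)
    return {axis: abs(pa[axis] - pb[axis]) for axis in AXES}

print(axis_discrepancy("English article about topic X.",
                       "Artículo en español sobre el tema X."))
```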
Learning from Demonstration for Robotic Manipulation
Fellow: Neel Kulkarni, Computer Science Undergraduate Student
Faculty Mentor and Affiliation: Dr. Yi Guo, Department of Electrical and Computer Engineering
Project Description: Robot manipulators need to learn from human demonstrations for household tasks such as cooking or grasping. The student will develop machine learning-based algorithms that learn from human demonstrations (such as videos) for robot task and motion planning.
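As a minimal illustration of learning from demonstration, the sketch below performs behavioral cloning: a small policy network is fit to demonstrated state-action pairs. The dimensions and random "demonstrations" are placeholders; real inputs would come from videos or teleoperated trajectories.

```python
# Minimal behavioral-cloning sketch (PyTorch): map observed states to demonstrated actions.
import torch
import torch.nn as nn

state_dim, action_dim = 12, 4                      # assumed dimensions
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                       nn.Linear(64, action_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

states = torch.randn(256, state_dim)               # demonstrated states
actions = torch.randn(256, action_dim)             # demonstrated actions

for epoch in range(100):                           # supervised imitation
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(states), actions)
    loss.backward()
    opt.step()

print("imitation loss:", float(loss))
```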
Applying Machine Learning to Accelerate Biologics Formulation Screening
Fellow: Raj Rana, Computer Science Undergraduate Student
Faculty Mentor and Affiliation: Dr. Pin-Kuang Lai, Department of Chemical Engineering and Materials Science
Project Description: Biologics are drugs that are produced from living organisms or contain components of living organisms. Small molecules are commonly added to biologics to stabilize the products. This process is called formulation. Currently, the formulation design is based on a trial-and-error process, which is time-consuming. The lack of knowledge of the interactions between biologics and small molecules hampers the formulation design. In this project, we aim to apply machine learning to identify the key attributes governing the effects of formulation and develop predictive models to facilitate formulation screening.
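For illustration, the sketch below trains a simple regression model on hypothetical formulation attributes, inspects which attributes matter most, and ranks a virtual library of candidate formulations. The feature names, data, and stability metric are all invented for the example.

```python
# Illustrative sketch (scikit-learn): rank candidate formulations by predicted stability.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# columns: excipient concentration, pH, ionic strength, surfactant fraction (mock features)
X = rng.uniform(0, 1, size=(200, 4))
y = 1.0 - 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 200)  # mock stability readout

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("feature importances:", model.feature_importances_.round(2))

candidates = rng.uniform(0, 1, size=(1000, 4))      # virtual formulation library
ranked = candidates[np.argsort(model.predict(candidates))[::-1][:5]]
print("top-5 predicted formulations:\n", ranked.round(2))
```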
Multimodal Foundation Models for Efficient Seismic Imaging
Fellow: Amartya Kalra, Computer Science Undergraduate Student
Faculty Mentor and Affiliation: Dr. Zhaozhuo Xu, Department of Computer Science
Project Description: Seismic imaging (SI) is the challenging task of analyzing subsurface conditions from noisy reflected acoustic signals. Recently, data-driven approaches based on deep learning (DL) have been envisioned to improve SI efficiency. However, these improvements are restricted to settings of greatly reduced scale and complexity. To extend these approaches to real-scale seismic data measured in terabytes, researchers must develop novel approaches that digest these massive data and model the subsurface conditions. This proposal targets enhancing SI with multimodal information. We would like to explore whether natural language descriptions of the SI process can improve its performance. Moreover, we would like to dive deeper into the mechanism of multimodal foundation models and explore how language and visual information interact with each other within the model.
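The toy sketch below conveys the multimodal idea: features from a seismic section are fused with an embedding of its natural-language description to predict a subsurface property. Shapes, encoders, and the prediction target are placeholders.

```python
# Toy PyTorch sketch: fuse seismic-section features with a text embedding.
import torch
import torch.nn as nn

class MultimodalSI(nn.Module):
    def __init__(self, vocab=1000):
        super().__init__()
        self.seismic = nn.Sequential(                    # encoder for the seismic section
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.text = nn.EmbeddingBag(vocab, 32)           # encoder for the description
        self.head = nn.Linear(32 + 32, 1)                # fused prediction head

    def forward(self, section, tokens):
        fused = torch.cat([self.seismic(section), self.text(tokens)], dim=-1)
        return self.head(fused)

model = MultimodalSI()
section = torch.randn(8, 1, 128, 128)                   # fake seismic sections
tokens = torch.randint(0, 1000, (8, 20))                # fake description tokens
print(model(section, tokens).shape)                     # torch.Size([8, 1])
```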
Denoising Bioimaging Data Through Machine Learning: A New Frontier in Macromolecule Analysis
Fellow: Chloe Borentain, Pure and Applied Mathematics Undergraduate Student
Faculty Mentor and Affiliation: Dr. Kwahun Lee, Department of Chemistry and Chemical Biology
Project Description: We propose a machine learning-based approach to enhance the analysis of macromolecule dynamics. Our method will focus on denoising existing video streams to accelerate the accumulation of usable data. Building on data from the Principal Investigator's previous paper (Nano Lett. 2024, 24, 1, 519–524), we aim to integrate machine learning into our custom MATLAB code to enhance the denoising process. This enhancement will facilitate precise tracking of nanostructure dynamics, even in scenarios with a low signal-to-noise ratio. Specifically, we will focus on the dynamics of certain populations of gold nanostars that were previously untrackable by the customized MATLAB code due to their weak signal after the process of distinguishing nanostars from the background. The completion of this initial phase will set the stage for high-throughput analyses using single-nanoconstruct tracking approaches based on gold nanostars. These emerging nanoprobes exhibit orientation-dependent contrast patterns under differential interference contrast (DIC) microscopy, offering valuable insights into nanoparticle-cell interfaces. Moreover, this project lays the groundwork for the PI's long-term objective: developing machine learning-based real-time signal tracking capabilities for macromolecular dynamics. This includes assessing the developability of antibodies and studying the liquid-liquid phase separation (LLPS) of amyloidogenic proteins.
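As a minimal illustration of the ML denoising step, the sketch below trains a small convolutional autoencoder to map noisy frames to cleaner ones, using synthetic data in place of the DIC microscopy video; the real work would couple a model like this (or a stronger one) to the existing MATLAB tracking pipeline.

```python
# Minimal sketch (PyTorch): a small convolutional denoiser trained on synthetic frames.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))

clean = torch.rand(32, 1, 64, 64)                 # stand-in "ground truth" frames
noisy = clean + 0.3 * torch.randn_like(clean)     # simulated low-SNR frames

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    loss.backward()
    opt.step()

print("reconstruction MSE:", float(loss))
```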
Efficient Communication Payload Reductions for Federated Learning
Fellow: Zongqing Qi, Computer Science Graduate Student
Faculty Mentor and Affiliation: Dr. Xiaodong Yu, Department of Computer Science
Project Description: Federated learning (FL) has been widely used in real-world scenarios such as medical AI. It adopts a client-server architecture, allowing geographically distributed clients to contribute to the same model training without sharing their own training samples. While FL protects data privacy, it incurs expensive data communication. Recent studies show that communication can occupy up to 90% of the training time, especially for large-scale model training across different countries, where network bandwidth is usually low. Some previous works have attempted to reduce the communication payloads using various data compression approaches. However, there has yet to be a systematic study to answer open questions such as: 1) What type of data is most suitable to be communicated? 2) How frequently should these data be compressed and communicated? 3) To what extent can these data be compressed? In this project, we will thoroughly investigate current efforts on communication payload compression for FL, comprehensively evaluate them on FL simulation frameworks (e.g., APPFL, OpenFL, and FedML), analyze the results to answer these open research questions, and explore fair comparison approaches for different data compression techniques. We will use the outcomes of this project to guide our further research on designing a highly efficient, adaptive data compression framework for FL.
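As one concrete example of the payload-reduction techniques to be compared, the sketch below applies top-k sparsification to a client update before transmission; the 1% ratio and the toy update are arbitrary choices for illustration.

```python
# Sketch: top-k sparsification of a client update in federated learning.
import torch

def topk_compress(update: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude entries; send (indices, values, shape)."""
    flat = update.flatten()
    k = max(1, int(ratio * flat.numel()))
    _, indices = torch.topk(flat.abs(), k)
    return indices, flat[indices], update.shape

def topk_decompress(indices, values, shape):
    flat = torch.zeros(int(torch.tensor(shape).prod()))
    flat[indices] = values
    return flat.reshape(shape)

update = torch.randn(1000, 1000)                      # a client's model update
idx, vals, shape = topk_compress(update)
recovered = topk_decompress(idx, vals, shape)

sent = idx.numel() * 2                                # indices + values transmitted
print(f"payload reduced to {100 * sent / update.numel():.1f}% of dense size")
print("relative error:", float((recovered - update).norm() / update.norm()))
```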
Snakemake for RNA-Seq Data Analysis
Fellow: Varad Hatteka, Data Science Graduate Student
Faculty Mentor and Affiliation: Dr. Ansu Perekatt, Department of Chemistry and Chemical Biology
Project Description: RNA-sequencing (RNA-seq) of biological samples generates large amounts of sequencing data. Meaningful interpretation of RNA-seq data is a multistep process that involves cleaning up the reads, alignment to the genome, batch correction, and downstream pathway analysis. To improve the scalability and reproducibility of this multistep process, the proposed project seeks to create a pipeline called VIPER, a Snakemake workflow for efficient RNA-seq analysis. This project will result in a fast, efficient, customizable, and easy-to-use tool for RNA-seq data analysis.
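For illustration, a minimal Snakefile in Snakemake's Python-based workflow language is sketched below, showing the shape of such a pipeline: trim reads, align them, and count features. The sample names, reference paths, and tool choices (fastp, hisat2, featureCounts) are assumptions, not necessarily those VIPER will use.

```python
# Minimal Snakefile sketch (Snakemake workflow language).
SAMPLES = ["sampleA", "sampleB"]

rule all:
    input:
        "results/counts.txt"

rule trim:
    input:
        "raw/{sample}.fastq.gz"
    output:
        "trimmed/{sample}.fastq.gz"
    shell:
        "fastp -i {input} -o {output}"

rule align:
    input:
        "trimmed/{sample}.fastq.gz"
    output:
        "aligned/{sample}.sam"
    shell:
        "hisat2 -x reference/genome_index -U {input} -S {output}"

rule count:
    input:
        expand("aligned/{sample}.sam", sample=SAMPLES)
    output:
        "results/counts.txt"
    shell:
        "featureCounts -a reference/annotation.gtf -o {output} {input}"
```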
Semiology-GPT: Revolutionizing Seizure Onset Zone Detection with Large Language Models from Semiology Descriptions
Fellow: Yaxi Luo, Computer Science Graduate Student
Faculty Mentor and Affiliation: Dr. Feng Liu, Department of Systems and Enterprises
Project Description: Epilepsy is one of the most common neurological disorders, affecting 3.4 million people in the United States, including 470,000 children. Seizures in up to one third of epilepsy patients are drug refractory, and surgery offers a necessary means to resect the Epileptogenic Zones (EZs). In this project, we aim to improve the clinical value of seizure semiology, which involves the analysis of stereotypical motor manifestations such as convulsions and tonic, clonic, and hyperkinetic changes, for localizing and lateralizing the seizure onset zone (SOZ). The descriptive nature of seizure semiology makes Large Language Models (LLMs) a natural fit for improving the accuracy and generalization of semiology interpretation. However, common-domain models were not trained to capture medical-domain knowledge specifically or in great detail, and as a result they often provide incorrect medical responses. Existing literature has shown the efficacy of fine-tuning LLMs to adapt them to a specific domain. We propose to build the first seizure-semiology-specific large language model (LLM), termed the Semiology Generative Pre-trained Transformer (Semiology-GPT), to aggregate semiology descriptions from collaborating epilepsy centers at Rutgers University, Harvard Medical School, UC Davis Medical School, Emory University, and University Hospitals Cleveland Medical Center (UHCMC) at Case Western Reserve University (CWRU), and to map those descriptions to validated EZs (the resected EZs in patients who achieved postsurgical seizure freedom). This will be the first LLM framework focused specifically on seizure semiology, improving its clinical value.
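One possible modeling route is sketched below: fine-tuning a pretrained language model to map free-text semiology descriptions to candidate onset-zone labels using the Hugging Face Trainer. The base model, label set, and the two toy examples are placeholders; Semiology-GPT itself would be trained on the multi-center data described above.

```python
# Hedged sketch: fine-tune a pretrained language model on semiology text classification.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = ["temporal", "frontal", "parietal", "occipital"]   # assumed label set
texts = ["right hand clonic jerking followed by head version",
         "behavioral arrest with oral automatisms and lip smacking"]
labels = [1, 0]                                             # toy annotations

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok(texts, truncation=True, padding=True)

class SemiologyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))
args = TrainingArguments(output_dir="semiology_gpt_sketch",
                         num_train_epochs=1, per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=SemiologyDataset()).train()
```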
Machine Learning Enhanced Multi-Scale Modeling for the Design of Bio-based Carbon Materials for Wastewater Purification
Fellow: Wei Xu, Computer Science Graduate Student
Faculty Mentor and Affiliation: Dr. Alyssa Hensley, Department of Chemical Engineering and Materials Science
Project Description: Bio-based carbons produced from biomass sources show promise as sustainable sorbent materials to improve water purification technology and tackle the water crisis. However, the complex, amorphous nature of such carbons makes it nearly impossible to determine the fundamental, nanoscale chemical structures and phenomena that lead to efficient binding of pollutant chemicals using experiments alone. Thus, there is a critical need to develop accurate multi-scale models of bio-based carbons for wastewater purification. Currently, the development of such multi-scale models is limited by the (1) chemical complexity of bio-based carbons and (2) existing model size limitations. In this project, we will combine several multi-scale modeling techniques (e.g. density functional theory, molecular dynamics) with machine learning to enable the rapid and rational design of bio-based carbons for heavy metal extraction from wastewater. Overall, this work is expected to enable quantification of the nanoscale properties that determine the structure and performance of bio-based carbons for heavy metal adsorption.
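As a small illustration of coupling multi-scale modeling with machine learning, the sketch below fits a Gaussian-process surrogate to a handful of DFT-style data points and predicts the binding energy of a new carbon motif; the descriptors and values are hypothetical stand-ins for the project's actual features.

```python
# Illustrative sketch (scikit-learn): a surrogate model over mock DFT data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# columns: oxygen-group density, pore size (nm), local curvature (mock descriptors)
X = np.array([[0.1, 1.0, 0.2], [0.3, 0.8, 0.5], [0.5, 1.5, 0.1],
              [0.7, 0.6, 0.7], [0.9, 1.2, 0.4]])
y = np.array([-0.4, -0.9, -0.6, -1.4, -1.1])       # mock adsorption energies (eV)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

candidate = np.array([[0.6, 1.0, 0.3]])            # a new carbon motif to screen
mean, std = gp.predict(candidate, return_std=True)
print(f"predicted binding energy: {mean[0]:.2f} +/- {std[0]:.2f} eV")
```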
Text-to-ESQ: Question to ElasticSearch Query Generation for Knowledge Screening in Healthcare
Fellow: Sri Jay Adarsh Gogineni, Machine Learning Graduate Student
Faculty Mentor and Affiliation: Dr. Ping Wang, Department of Computer Science
Project Description: In this project, we aim to explore how to leverage advanced natural language processing techniques for the Text-to-ESQ task, which directly generates ElasticSearch queries from natural language questions over a NoSQL database of medical records. To this end, we will investigate language generation-based models and semantic parsing-based models for query generation. A new large-scale Text-to-ESQ dataset with complicated query structures will be created using a mixture of automatic generation and manual annotation strategies to train and evaluate these models; inputs are natural language questions and outputs are intermediate queries that can be transformed into standard ElasticSearch queries using a rule-based method. Finally, different training strategies, such as fine-tuning large pre-trained language models, will be investigated and adopted to train Text-to-ESQ models.
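A concrete example of the task's input/output pairing is given below: a natural-language question over medical records and a corresponding ElasticSearch query body. The index fields ("diagnosis", "age") are hypothetical; the model would learn to produce such queries, or an intermediate form convertible to them.

```python
# Example pairing for the Text-to-ESQ task.
import json

question = "How many patients over 65 were diagnosed with pneumonia?"

es_query = {
    "query": {
        "bool": {
            "must": [
                {"match": {"diagnosis": "pneumonia"}},
                {"range": {"age": {"gt": 65}}}
            ]
        }
    },
    "track_total_hits": True
}

print(question)
print(json.dumps(es_query, indent=2))
# The body could then be sent with the official client, e.g.
# Elasticsearch(...).search(index="medical_records", body=es_query)
```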
Designing an Intelligent Virtual Assistant for Attention Training in Augmented Reality
Fellow: Katherine Shagalov, Computer Science Undergraduate Student
Faculty Mentor and Affiliation: Dr. Ting Liao, Department of Systems and Enterprises
Project Description: Attention difficulties arise when an individual struggles to sustain attention at a developmentally appropriate level. They can present as children or emerging adults, such as college students, not being engaged in the classroom. The problem becomes more intricate when many learning and daily activities take place online. To address it, this project aims to design and develop an intelligent virtual assistant (IVA) in Augmented Reality (AR) for attention training and support. The project first involves building prediction models to understand human attention and cognitive load in real time using multimodal data from wearable sensors. It then aims to propose a design strategy for the IVA in terms of its virtual appearance and interaction behavior. The models and design strategy will be implemented and tested on Microsoft HoloLens 2 or Meta Quest Pro.
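As a sketch of the first step, the example below predicts an attention state from windowed wearable-sensor features with a simple classifier; the two mock features (mean heart rate, electrodermal-activity variance) and the binary label are illustrative assumptions.

```python
# Sketch (scikit-learn): classify attention state from windowed wearable features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
heart_rate = rng.normal(75, 8, n)            # mean HR per 30-second window
eda_var = rng.gamma(2.0, 0.5, n)             # electrodermal-activity variance
X = np.column_stack([heart_rate, eda_var])
attentive = (heart_rate < 78).astype(int)    # mock label for the toy data

X_tr, X_te, y_tr, y_te = train_test_split(X, attentive, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
```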
Deep Learning for Markerless Biomechanic Motion Tracking
Fellow: Shahaji Nirmal Deshmukh, Data Science Graduate Student
Faculty Mentor and Affiliation: Dr. Jacqueline Libby, Department of Mechanical Engineering
Project Description: Automated camera tracking of the human body can potentially be used to treat patients with movement disorders such as stroke, ALS, or Parkinson's disease, to name only a few. Treatment may consist of movement disorder analysis by a doctor, feedback for robotic assistive devices, or patient biofeedback for neuromuscular repair. Current state-of-the-art biomechanical tracking systems are marker-based: the subject must be outfitted with special markers and surrounded by a suite of specialized cameras in a controlled laboratory setup, limiting their use in practical medical care. Through the use of open-source deep learning tools, we can now track the body without any markers, using simple webcams. If we can make these systems robust, we can use them for medical treatment in doctors' offices and patients' homes. In this project, the student will use a set of cameras and Python libraries to build real-time 3D body tracking systems. First, the student will go "deep" into the open-source deep-learning libraries (no pun intended) and explore how we can get under the hood to manipulate these libraries for our own purposes. After that, potential directions for this project include: 1) using redundant cameras for robustness, 2) sensor fusion with other biosensing modalities, 3) exploring the scope of generalization (with respect to the subject and/or the environment), 4) integrating these tools into video games for biofeedback therapy, and 5) comparison with other computer vision motion tracking algorithms. Depending on progress, there is a good chance this work can lead to future publications.
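For a sense of what these open-source tools look like, the brief sketch below runs one of them (MediaPipe Pose, used here only as an example; the project may choose different libraries) on a single RGB frame and prints a few landmark coordinates.

```python
# Sketch: markerless pose landmarks on one frame with MediaPipe Pose.
import cv2
import numpy as np
import mediapipe as mp

frame_bgr = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder webcam frame
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    results = pose.process(frame_rgb)

if results.pose_landmarks:
    for idx, lm in enumerate(results.pose_landmarks.landmark[:5]):
        # normalized image coordinates plus a relative depth estimate
        print(idx, round(lm.x, 3), round(lm.y, 3), round(lm.z, 3))
else:
    print("no person detected in this frame")         # expected on a blank frame
```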
Connecting the Dots via Large Language Models
Fellow: Shreyas Desai, Computer Science Graduate Student
Faculty Mentor and Affiliation: Dr. Yue Ning, Department of Computer Science
Project Description: In the era of vast online information, making sense of human events can be challenging. However, with the advent of generative AI, such as large language models, we now possess the capability to delve into rich event contexts and summarize these events. This project aims to explore methodologies utilizing large language models to analyze temporal human events, explaining how they unfold within global spatiotemporal contexts. Our investigation will focus on understanding the roles entities and actors play in event progression, as well as how past occurrences influence future events. Leveraging the reasoning capabilities of large language models, we seek to augment the predictive capabilities of sequence models. Ultimately, we will develop tools and software that empower individuals to sift through information, identify critical components, and forecast future risks.
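A lightweight sketch of the intended workflow appears below: a timeline of past events is assembled into a prompt that asks an LLM to reason about the actors, their connections, and a plausible next event. The event records and the stubbed call_llm function are placeholders.

```python
# Sketch: build an event-timeline prompt for LLM-based event analysis and forecasting.
events = [
    {"date": "2024-03-01", "actor": "Country A", "action": "imposes tariffs",
     "target": "Country B"},
    {"date": "2024-03-10", "actor": "Country B", "action": "files trade complaint",
     "target": "WTO"},
]

def build_prompt(event_log):
    timeline = "\n".join(
        f"{e['date']}: {e['actor']} {e['action']} ({e['target']})"
        for e in event_log)
    return ("Given this timeline, identify the key actors, explain how the "
            "events are connected, and forecast one plausible next event:\n"
            + timeline)

def call_llm(prompt: str) -> str:
    return "LLM response placeholder"   # replace with a real model call

print(build_prompt(events))
print(call_llm(build_prompt(events)))
```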
ML and AI Driven Drug Discovery - Design & Synthesis of Novel SARS-CoV-2 Inhibitors
Fellow: Dilon Coonghe, Biology Undergraduate Student
Faculty Mentor and Affiliation: Dr. Sesha Alluri & Dr. Sunil Paliwal, Department of Chemistry and Chemical Biology
Project Description: This is a collaborative project between Dr. Sesha Alluri and Dr. Sunil Paliwal. The SARS-CoV-2 (COVID-19) virus has caused a severe global pandemic since the end of 2019. As treatments such as vaccines and medications were developed and distributed, the number of cases, as well as the death rate, dropped dramatically. However, reports show an increase in positive cases, particularly during the winter season, and there are still deaths and complications following a COVID-19 diagnosis. So far, very few oral medications have been developed to treat COVID-19, and there is still a critical need for more. The current research will involve discovering novel inhibitors of the SARS-CoV-2 main protease using AI and ML computational tools. We also plan to investigate a new class of HIV-1 protease inhibitors that have shown promising binding affinity to the SARS-CoV-2 protease. Our work will involve using ML and AI computational tools, such as Schrödinger Maestro, to design new oral medicines for COVID-19 through molecular modeling, docking, and virtual screening. Based on this analysis, target compounds will be identified and synthetic routes will be designed using the SciFinder program's ML/AI retrosynthetic analysis tool.
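For illustration only, the open-source sketch below performs one early virtual-screening step, ranking a small candidate library by fingerprint similarity to a reference ligand with RDKit; the project's actual workflow relies on Schrödinger Maestro for docking and screening, and the molecules shown are ordinary placeholders, not protease inhibitors.

```python
# Open-source illustration (RDKit): similarity-based ranking of a tiny candidate library.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

reference = Chem.MolFromSmiles("CC(=O)OC1=CC=CC=C1C(=O)O")      # aspirin (placeholder)
library = {
    "caffeine": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
    "ibuprofen": "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O",
    "benzene": "c1ccccc1",
}

ref_fp = AllChem.GetMorganFingerprintAsBitVect(reference, 2, nBits=2048)
scores = {}
for name, smiles in library.items():
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    scores[name] = DataStructs.TanimotoSimilarity(ref_fp, fp)

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: Tanimoto similarity {score:.2f}")
```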