Ruixiang (Ryan) Tang
About Me
I'm a tenure-track Assistant Professor in the Department of Computer Science at Rutgers University-New Brunswick. I earned my Ph.D. in Computer Science from Rice University under the guidance of Dr. Xia Hu, and my B.S. in Automation from Tsinghua University, where I was advised by Dr. Jiwen Lu.
My research focuses on Trustworthy AI, a critical area that requires building trust into every stage of the AI lifecycle, from data acquisition and model development to deployment and user interaction. Within this overarching theme, I specialize in safety, privacy, and explainability. I also collaborate closely with health informaticians from Yale, UTHealth, and Baylor College of Medicine, leveraging AI to address critical challenges in healthcare.
Large language models bring new and complex challenges to the trustworthiness of AI. My long-term objective is to develop computational regulatory frameworks for these generative AI systems that prevent potential misuse and ensure their responsible use.
🔥🔥🔥 Recruiting Fall 2025 Ph.D. students and research interns! I'm actively looking for strong, motivated students. If you are interested in working with me, please feel free to email me, and check here for more details.
Research Overview
News (2024)
09/2024: Three papers have been accepted at EMNLP 2024, covering LLM intellectual property protection, bias mitigation, and trustworthy agents.
08/2024: Our preprint introduces a rare disease question-answering dataset (ReDis-QA) to evaluate how well LLMs diagnose rare diseases.
07/2024: Our paper, Mitigating Relational Bias on Knowledge Graphs, has been accepted by TKDD.
04/2024: 🔥 Our paper, The Science of Detecting LLM-Generated Text, has been accepted as the cover paper of the April 2024 issue of Communications of the ACM (CACM)!
04/2024: Our research on AI watermarking has received the annual Hershel M. Rich Invention Award. Congratulations to the team!
03/2024: One paper has been accepted by NAACL 2024. We proposed a key prompt protection mechanism to safeguard large language models.
02/2024: One paper has been accepted by the Journal of Biomedical Informatics. We proposed a soft prompt calibration method to reduce the output variance of large language models in radiology report summarization.
01/2024: My recent research on leveraging LLMs to generate high-quality synthetic data has been highlighted in the funded NIH AIM-AHEAD and NSF Collaborative Research: III: Medium projects.
01/2024: Our paper, Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond, has been accepted by TKDD.
News (2023)
10/2023: Our paper on using LLMs for patient-trial matching received the AMIA 2023 Best Student Paper Award and the KDDM Student Innovation Award.
10/2023: Our paper on building knowledge refinement and retrieval systems for interdisciplinary biomedical research received a CIKM 2023 Best Demo Paper Honorable Mention.
10/2023: One paper has been accepted by EMNLP 2023. We introduced a membership inference attack targeting large language models to analyze the associated privacy risks.
09/2023: Two papers have been accepted by NeurIPS 2023. We proposed a honeypot mechanism to defend against backdoor attacks in language models.
08/2023: One paper has been accepted by ECML-PKDD 2023. We proposed a serial key protection mechanism for safeguarding DNN models.
08/2023: Two papers have been accepted by CIKM 2023. We proposed a transferable watermark for defending against model extraction attacks.
07/2023: Three papers have been accepted by the AMIA 2023 Annual Symposium. We investigated methods for harnessing the capabilities of LLMs in several healthcare tasks, such as named entity recognition, relation extraction, and patient-trial matching.
05/2023: One paper has been accepted by ACL 2023. We reveal that LLMs are "lazy learners" that tend to exploit shortcuts in prompts for downstream tasks.
04/2023: One paper has been accepted by SIGKDD Explorations 2023. We proposed a clean-label backdoor-based watermarking framework for safeguarding training datasets.
03/2023: One paper has been accepted by ICHI 2023. We proposed a deep ensemble framework for improving phenotype prediction from multi-modal data.
Services
Program Committee Member / Reviewer:
Conference: ARR, ICML, NeurIPS, ICLR, EMNLP, CIKM, AAAI, etc.
Journal: Nature Human Behaviour, TPAMI, TKDE, JAMIA, SNAM, etc.
Recognition:
Top Reviewer of NeurIPS 2023.
Research Experiences
Microsoft Research, Redmond, WA, May 2023 - Aug. 2023
Team: AI for Health Team
Role: Applied Scientist Intern
Project: Medical Dialogue Generation, Personally Identifiable Information (PII) Detection
Mentor(s): Gord Lueck, Rodolfo Quispe, Huseyin Inan, Janardhan Kulkarni
Microsoft Research, Remote, May 2022 - Aug. 2022
Team: AI for Health Team
Role: Research Intern
Project: Medical Dialogue Summarization, Privacy Risk Analysis in Large Language Models
Mentor(s): Gord Lueck, Rodolfo Quispe, Huseyin Inan, Janardhan Kulkarni
Adobe Research, Remote, May 2021 - Aug. 2021
Team: Document Intelligence Team
Role: Research Intern
Project: Watermarking Computer Vision Foundation Models and APIs
Mentor(s): Curtis Wigington, Rajiv Jain
Microsoft Research Asia, Beijing, China, Mar. 2019 - May 2019
Team: Social Computing Group
Role: Research Intern
Project: Interpretable Recommendation Systems
Mentor(s): Xiting Wang, Xing Xie
Duke University, Durham, NC, May 2018 - Aug. 2018
Team: Department of Radiology
Role: Summer Intern
Project: Classification of Chest CT using Case-level Weak Supervision
Mentor: Joseph Y. Lo