Ruixiang (Ryan) Tang
About Me
I'm a Tenure-Track Assistant Professor in the Department of Computer Science at Rutgers University-New Brunswick. I earned my Ph.D. in Computer Science from Rice University under the guidance of Dr. Xia Hu. I received my B.S. in Automation from Tsinghua University, where I was advised by Dr. Jiwen Lu.
Our research focuses on Trustworthy Machine Learning, a critical area that demands infusing trust throughout the machine learning lifecycle, from data acquisition and model development to deployment and user interaction. Within this overarching theme, I specialize in issues related to robustness, fairness, and explainability. Additionally, I collaborate closely with health informaticians from New Jersey Medical School, UTHealth, and Baylor College of Medicine, leveraging machine learning to address critical challenges in healthcare.
Large foundation models (e.g., large language models, vision-language models, and vision-action models) pose new and complex challenges to the trustworthiness of machine learning. Our long-term objective is to develop computational regulatory frameworks for these generative machine learning systems to prevent potential misuse and ensure their responsible use.
News
04/2025: Our paper "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" is now the all-time most-read article of ACM Transactions on Knowledge Discovery from Data (TKDD)!
02/2025: Received a $2,000 API credit from OpenAI to support our research on detecting and mitigating hallucinations in VLMs.
02/2025: One paper accepted by SIGKDD Explorations. We proposed a method to mitigate shortcut learning in natural language understanding tasks.
01/2025: One paper accepted by ICLR 2025. We proposed MRT, a method for efficiently editing multimodal representations.
01/2025: Our tutorial "Understanding Large-Scale Machine Learning Robustness under Paradigm Shift" has been accepted at SDM 2025. See you in Virginia!
09/2024: Three papers have been accepted for EMNLP 2024, focusing on issues related to LLM intellectual property protection, bias mitigation, and building trustworthy agents.
07/2024: Our paper, Mitigating Relational Bias on Knowledge Graphs, has been accepted by TKDD.
04/2024: 🔥 Our paper, The Science of Detecting LLM-Generated Text, has been accepted as the cover paper of the April 2024 issue of Communications of the ACM (CACM)!
04/2024: Our research on machine learning for watermarking has been awarded the annual Hershel M. Rich Invention Award. Congratulations to the team!
03/2024: One paper has been accepted by NAACL 2024. We proposed a key prompt protection mechanism to safeguard large language models.
02/2024: One paper has been accepted by the Journal of Biomedical Informatics. We proposed a soft prompt calibration method to mitigate the variance of large language model output in the radiology report summarization task.
01/2024: Our paper, Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond, has been accepted by TKDD.
10/2023: Our paper on using LLMs for patient-trial matching received the AMIA 2023 Best Student Paper Award and the KDDM Student Innovation Award.
10/2023: Our paper on building knowledge refinement and retrieval systems for interdisciplinary biomedical research received a CIKM 2023 Best Demo Paper Honorable Mention.
10/2023: One paper has been accepted by EMNLP 2023. We introduce a membership inference attack targeting large language models to analyze the associated privacy risks.
09/2023: Two papers have been accepted by NeurIPS 2023. We proposed a honeypot mechanism to defend against backdoor attacks in language models.
08/2023: One paper has been accepted by ECML-PKDD 2023. We proposed a serial key protection mechanism for safeguarding DNN models.
08/2023: Two papers have been accepted by CIKM 2023. We proposed a transferable watermark for defending against model extraction attacks.
07/2023: Three papers have been accepted by the AMIA 2023 Annual Symposium. We investigated methods for harnessing the capabilities of LLMs in several healthcare tasks, such as named entity recognition, relation extraction, and patient-trial matching.
05/2023: One paper has been accepted by ACL 2023. We reveal that LLMs are "lazy learners" that tend to exploit shortcuts in prompts for downstream tasks.
04/2023: One paper has been accepted by SIGKDD Explorations 2023. We proposed a clean-label backdoor-based watermarking framework for safeguarding training datasets.
03/2023: One paper has been accepted by ICHI 2023. We proposed a deep ensemble framework for improving phenotype prediction from multimodal data.
Services
Program Committee Member / Reviewer:
Conferences: ICML, NeurIPS, ICLR, ICCV, ARR, IJCAI, AAAI, SDM, CIKM, etc.
Journals: Nature Human Behaviour, npj Digital Medicine, TPAMI, TKDE, JAMIA, SNAM, etc.
Recognition:
Top Reviewer, NeurIPS 2023.