
Highlights from Apple’s Workshop on Natural Language Understanding and Privacy in AI


Earlier this year, Apple hosted a Workshop on Natural Language Understanding. The event brought together Apple researchers and members of the academic research community to discuss the state of the art in natural language understanding.

One important topic discussed at the workshop was the challenge of balancing user privacy with the data needs of conversational systems. Researchers discussed approaches such as pairing parametric language models with a k-nearest-neighbor retrieval component to limit privacy leakage; these approaches are described in the papers by Danqi Chen and Luke Zettlemoyer listed below.
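
To make the retrieval idea concrete, here is a minimal sketch of the k-nearest-neighbor language model interpolation from "Generalization through Memorization: Nearest Neighbor Language Models": a parametric model's next-token distribution is mixed with a distribution built from the nearest entries in an external datastore. The array shapes, hyperparameters, and function name are illustrative, not taken from the workshop presentations.

```python
import numpy as np

def knn_lm_next_token_probs(lm_probs, query_vec, datastore_keys, datastore_values,
                            vocab_size, k=8, temperature=1.0, lam=0.25):
    """Interpolate a parametric LM distribution with a k-NN retrieval distribution.

    lm_probs:         (vocab_size,) next-token probabilities from the parametric LM
    query_vec:        (d,) hidden state representing the current context
    datastore_keys:   (n, d) cached context representations
    datastore_values: (n,) next-token ids observed after each cached context
    lam:              interpolation weight on the retrieval distribution
    """
    # Squared L2 distance from the query to every datastore key.
    dists = np.sum((datastore_keys - query_vec) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]

    # Turn negative distances of the k nearest neighbors into softmax weights.
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()

    # Aggregate neighbor weights onto their recorded next tokens.
    knn_probs = np.zeros(vocab_size)
    for w, token_id in zip(weights, datastore_values[nearest]):
        knn_probs[token_id] += w

    # Final distribution mixes the retrieval and parametric components.
    return lam * knn_probs + (1.0 - lam) * lm_probs
```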

Another topic discussed was applying foundation models to production systems. These models have very large numbers of parameters and typically require large computing clusters to run. Researchers explored compression techniques such as weight pruning and reduced numerical precision to make the models compact enough to run on smaller devices, which also helps preserve user privacy by keeping data on-device.
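
The sketch below illustrates two of the compression ideas mentioned, magnitude-based weight pruning and symmetric 8-bit quantization, applied to a single weight matrix. It is a simplified illustration under our own assumptions, not the specific methods from the related papers.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights; keep the top (1 - sparsity) fraction."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize_int8(weights):
    """Symmetric 8-bit quantization: store int8 values plus one float scale per tensor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for use at inference time."""
    return q.astype(np.float32) * scale

# Example: prune, then quantize, a random weight matrix and compare storage size.
w = np.random.randn(512, 512).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.5)
w_q, s = quantize_int8(w_pruned)
print("stored bytes (int8):", w_q.nbytes, "vs float32:", w.nbytes)
```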

There were also discussions of challenges with foundation models' inconsistency and unsafe outputs. Research by Pascale Fung and others examined hallucination in natural language generation, where models produce fluent but unsupported claims. Researchers suggested grounding conversational agents in retrieved facts and using auxiliary models and systems as safeguards.
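
As a rough illustration of that pipeline, the sketch below grounds a reply in retrieved passages and passes it through an auxiliary safety classifier before returning it. The retriever, generator, and classifier are placeholder callables assumed for this example; none of these names come from the workshop papers.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float

def retrieve_facts(query: str, index) -> list[Passage]:
    """Placeholder retriever: return the top passages from a document index."""
    return index.search(query, top_k=3)

def generate_grounded_reply(query: str, passages: list[Passage], generator) -> str:
    """Condition the generator on retrieved passages so claims can be traced to evidence."""
    context = "\n".join(p.text for p in passages)
    return generator(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

def safe_reply(query: str, index, generator, safety_classifier, fallback: str) -> str:
    """Auxiliary safeguard: if the safety model flags the reply, return a fallback instead."""
    passages = retrieve_facts(query, index)
    reply = generate_grounded_reply(query, passages, generator)
    if safety_classifier(reply) == "unsafe":
        return fallback
    return reply
```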

Another area of interest was the use of multimodal information in conversational systems. Researchers are exploring prior interactions, on-screen information, gestures, gaze, and visual cues to reduce ambiguity in language understanding, as sketched below. They also discussed the importance of contextual knowledge in natural dialogue and proposed approaches for incorporating it into language generation.
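
Purely as a toy illustration of how such signals can disambiguate a vague reference like "that one," the sketch below scores on-screen entities with simple recency and salience features. The entity attributes and weights are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class ScreenEntity:
    name: str
    recency: float   # how recently it was mentioned or shown (0..1)
    salience: float  # e.g. gaze or visual prominence (0..1)

def resolve_reference(candidates: list[ScreenEntity],
                      w_recency: float = 0.5, w_salience: float = 0.5) -> ScreenEntity:
    """Pick the most likely referent for an ambiguous mention by scoring context signals."""
    return max(candidates, key=lambda e: w_recency * e.recency + w_salience * e.salience)

# Example: two items on screen; the one the user is looking at wins.
items = [ScreenEntity("coffee shop", recency=0.4, salience=0.9),
         ScreenEntity("gas station", recency=0.7, salience=0.2)]
print(resolve_reference(items).name)
```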

The workshop also covered the use of foundation models for data synthesis. These models can generate synthetic training data, which reduces the need for manual labeling and can help preserve privacy. However, challenges remain in measuring and ensuring the quality and diversity of the generated data.
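
A minimal sketch of that workflow is shown below: a language model (a placeholder callable here) generates candidate examples for a label, and simple filters check label consistency and drop near-duplicates. The prompt, classifier, and similarity threshold are assumptions for illustration, not methods reported from the workshop.

```python
import difflib

def generate_synthetic_examples(llm, label: str, n: int) -> list[str]:
    """Prompt a language model (placeholder callable) for n examples of a given intent label."""
    prompt = f"Write one short user request that a voice assistant should label as '{label}'."
    return [llm(prompt) for _ in range(n)]

def filter_for_quality_and_diversity(examples, classifier, label, sim_threshold=0.8):
    """Keep examples the classifier agrees with, and drop near-duplicates of kept ones."""
    kept = []
    for text in examples:
        if classifier(text) != label:          # quality: label-consistency check
            continue
        too_similar = any(
            difflib.SequenceMatcher(None, text, prev).ratio() > sim_threshold
            for prev in kept
        )                                       # diversity: near-duplicate filter
        if not too_similar:
            kept.append(text)
    return kept
```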

Overall, the workshop provided valuable insights into the field of natural language understanding and the challenges and possibilities of using foundation models. Researchers shared their work and discussed potential solutions to improve the performance and privacy of conversational systems.

Related Videos
– “STAIR: Learning Sparse Text and Image Representation in Grounded Tokens,” Yinfei Yang (Apple)
– “Building Language Models with Modularity,” Noah Smith (University of Washington)
– “Model-Aided Human Annotation at Scale,” Hadas Kotek (Apple)
– “Prompting for a Conversation: How to Control a Dialog Model?” Yimai Fang (Apple)
– “Towards Practical Use of Large Pre-Trained Language Models: Addressing Errors and Inconsistencies,” Chris Manning (Stanford University)
– “Grounded Dialogue Generation with Cross-encoding Re-ranker, Grounding Span Prediction, and Passage Dropout,” Helen Meng (Chinese University of Hong Kong)

Related Papers
– “LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale” by Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer
– “Recovering Private Text in Federated Learning of Language Models” by Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, and Danqi Chen
– “Survey of Hallucination in Natural Language Generation” by Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Wenliang Dai, Andrea Madotto, and Pascale Fung
– “Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision” by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, and Tom Duerig
– “Generalization through Memorization: Nearest Neighbor Language Models” by Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis
– “Combining Compressions for Multiplicative Size Scaling on Natural Language Tasks” by Rajiv Movva, Jinhao Lei, Shayne Longpre, Ajay Gupta, and Chris DuBois
– “Generating Natural Questions from Images for Multimodal Assistants” by Alkesh Patel, Akanksha Bindal, Hadas Kotek, Christopher Klein, and Jason Williams
– “Training Language Models with Memory Augmentation” by Zexuan Zhong, Tao Lei, and Danqi Chen

Acknowledgements
Special thanks to Christopher Klein, David Q. Sun, Dhivya Piraviperumal, and others for their contributions to the workshop.
