Google Showcases Latest Research in Natural Language Processing at ACL 2023

Google, a Diamond Level sponsor of ACL 2023, is participating in the 61st annual meeting of the Association for Computational Linguistics. This premier conference covers various research areas related to computational approaches to natural language. With over 50 publications and active involvement in workshops and tutorials, Google is showcasing its latest research in natural language processing and understanding.

If you’re registered for ACL 2023, visit the Google booth to learn more about projects at Google that tackle interesting problems for billions of people. Below is a list of Google’s affiliations and the individuals involved in the conference.

Board and Organizing Committee:
– Dan Garrette (Area Chair)
– Annie Louis (Workshop Chair)
– Lei Shu (Publication Chair)

Program Committee:
– Vinodkumar Prabhakaran
– Najoung Kim
– Markus Freitag

Spotlight Papers:
– NusaCrowd: Open Source Initiative for Indonesian NLP Resources
– Optimizing Test-Time Query Representations for Dense Retrieval
– PropSegmEnt: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition

Papers:
– Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM’s Translation Capability
– Prompting PaLM for Translation: Assessing Strategies and Performance
– Query Refinement Prompts for Closed-Book Long-Form QA
– To Adapt or to Annotate: Challenges and Interventions for Domain Adaptation in Open-Domain Question Answering
– FRMT: A Benchmark for Few-Shot Region-Aware Machine Translation
– Conditional Generation with a Question-Answering Blueprint
– Coreference Resolution Through a Seq2Seq Transition-Based System
– Cross-Lingual Transfer with Language-Specific Subnetworks for Low-Resource Dependency Parsing
– DAMP: Doubly Aligned Multilingual Parser for Task-Oriented Dialogue
– RARR: Researching and Revising What Language Models Say, Using Language Models
– Benchmarking Large Language Model Capabilities for Conditional Generation
– Crosslingual Generalization Through Multitask Fine-Tuning
– DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering
– Resolving Indirect Referring Expressions for Entity Selection
– SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models
– The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks
– Character-Aware Models Improve Visual Text Rendering
– Cold-Start Data Selection for Better Few-Shot Language Model Fine-Tuning: A Prompt-Based Uncertainty Propagation Approach
– Covering Uncommon Ground: Gap-Focused Question Generation for Answer Assessment
– FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction
– Dialect-Robust Evaluation of Generated Text
– Better Zero-Shot Reasoning with Self-Adaptive Prompting
– Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback
– Natural Language to Code Generation in Interactive Data Science Notebooks
– Teaching Small Language Models to Reason
– Using Domain Knowledge to Guide Dialog Structure Induction via Neural Probabilistic Soft Logic
– A Needle in a Haystack: An Analysis of High-Agreement Workers on MTurk for Summarization
– Federated Learning of Gboard Language Models with Differential Privacy
– KAFA: Rethinking Image Ad Understanding with Knowledge-Augmented Feature Adaptation of Vision-Language Models

This is just a glimpse of the research Google is presenting at ACL 2023. Don’t miss the opportunity to explore these advances in natural language processing and understanding.
