Generative AI Weekly Research Highlights | Jan'24 Part 3
Disclaimer: The content in this video is AI-generated and adheres to YouTube's guidelines. Each video undergoes manual review and curation before publishing to ensure accuracy and quality.
1. The What, Why, and How of Context Length Extension Techniques in Large Language Models – A Detailed Survey
[Link](https://arxiv.org/pdf/2401.07872.pdf)
2. AttackEval: How to Evaluate the Effectiveness of Jailbreak Attacking on Large Language Models
[Link](https://arxiv.org/pdf/2401.09002.pdf)
3. Large Language Models are Null-Shot Learners
[Link](https://arxiv.org/pdf/2401.08273.pdf)
4. Gender Bias in Machine Translation and The Era of Large Language Models
[Link](https://arxiv.org/pdf/2401.10016.pdf)
5. Leveraging Biases in Large Language Models: “Bias-kNN” for Effective Few-Shot Learning
[Link](https://arxiv.org/pdf/2401.09783.pdf)
6. Are Self-Explanations from Large Language Models Faithful?
[Link](https://arxiv.org/pdf/2401.07927.pdf)
7. Silent Guardian: Protecting Text from Malicious Exploitation by Large Language Models
[Link](https://arxiv.org/pdf/2312.09669.pdf)
8. Watch Your Language: Investigating Content Moderation with Large Language Models
[Link](https://arxiv.org/pdf/2309.14517.pdf)
00:00 Intro
00:20 Extending LLM Context Length
00:43 Assessing Jailbreak Attacks on LLMs
01:08 Null-Shot Prompting: A New Technique
01:27 Tackling Gender Bias in Translation
01:50 Bias-kNN: Leveraging LLM Biases
02:12 Self-Explanations in LLMs
02:30 Silent Guardian: Protecting Text Data
02:53 LLMs in Content Moderation
03:22 End
#generativeai #promptengineering #largelanguagemodels #openai #chatgpt #gpt4 #ai #abcp #prompt #responsibleai #promptengineer