WiP: Towards Light Adaptation of Large Language Models For Personal Hardware
Publisher: Association for Computing Machinery, New York, NY, United States
Qualifiers
- Short-paper
- Research
- Refereed limited