https://github.com/mit-han-lab/streaming-llm/blob/…
streaming-llm/README.md at main · mit-han-lab/streaming-llm
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks - streaming-llm/README.md at main · mit-han-lab/streaming-llm
https://github.com/mit-han-lab/streaming-llm/activ…
Activity · mit-han-lab/streaming-llm · GitHub
Guangxuan-Xiao pushed 1 commit • 6b6c5b0…bc0699b • on Oct 20, 2023: add slides
Guangxuan-Xiao pushed 1 commit • 11164fb…6b6c5b0 • on Oct 19, 2023: Merge pull request #20 from tomaarsen/hotfix/move_to_model_device
https://github.com/mit-han-lab/streaming-llm/pull/…
Enable explictly setting transformer model cache #56 - GitHub
https://github.com/mit-han-lab/streaming-llm/blob/…
streaming-llm/streaming_llm/enable_streaming_llm.py at main - GitHub
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks - mit-han-lab/streaming-llm
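The file named in this result, streaming_llm/enable_streaming_llm.py, is the entry point that sets a loaded model up for streaming decoding and returns a KV-cache manager. The sketch below is a hedged usage example only: the signature enable_streaming_llm(model, start_size, recent_size), the argument values, and the checkpoint name are assumptions, so check the file itself for the exact interface.

```python
# Usage sketch only; argument names, values, and the checkpoint are assumptions,
# not verified against the file linked above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from streaming_llm.enable_streaming_llm import enable_streaming_llm

model_name = "lmsys/vicuna-7b-v1.3"  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Keep 4 attention-sink tokens plus the 2000 most recent tokens in the KV cache.
kv_cache = enable_streaming_llm(model, start_size=4, recent_size=2000)
```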
https://github.com/mit-han-lab/streaming-llm/pull/…
Enable explictly setting transformer model cache #56 - GitHub
Open • JiaxuanYou wants to merge 1 commit into mit-han-lab:main from JiaxuanYou:main • Commits: 1
https://github.com/mit-han-lab/streaming-llm/issue…
Google Colab installation · Issue #8 · mit-han-lab/streaming-llm
Guangxuan-Xiao closed this as completed on Oct 17, 2023. h3ndrik added a commit to h3ndrik/streaming-llm that referenced this issue on Oct 31, 2023.
https://github.com/mit-han-lab/streaming-llm/issue…
b979594a04f1bbefe1ff21eb8affacef2a186d25 · Issue #26 · mit-han-lab ...
ghost changed the title from https://github.com/mempool/mempool/commit/b979594a04f1bbefe1ff21eb8affacef2a186d25 to b979594a04f1bbefe1ff21eb8affacef2a186d25 on Oct 12, 2023
https://github.com/mit-han-lab/streaming-llm/commi…
GitHub
Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length ...
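This snippet summarizes the two problems StreamingLLM targets: unbounded KV-cache growth and poor generalization beyond the training length. As a rough illustration of the attention-sink idea behind the approach, here is a minimal, self-contained sketch (not the repository's code; the class name, tensor layout, and sizes are assumptions) that keeps a few initial "sink" tokens plus a recent window and evicts the middle of the cache:

```python
# Minimal sketch of attention-sink style KV-cache eviction.
# Illustrative only; the repository's own cache logic lives in its
# streaming_llm package and may differ in layout and naming.
import torch


class StartRecentKVCacheSketch:
    """Keep the first `start_size` tokens (attention sinks) plus the most
    recent `recent_size` tokens; evict everything in between."""

    def __init__(self, start_size=4, recent_size=2000, seq_dim=2):
        self.start_size = start_size
        self.recent_size = recent_size
        self.seq_dim = seq_dim  # dimension holding the sequence length

    def __call__(self, past_key_values):
        # past_key_values: list of (key, value) tensors per layer,
        # each assumed to be shaped [batch, heads, seq_len, head_dim].
        seq_len = past_key_values[0][0].size(self.seq_dim)
        if seq_len <= self.start_size + self.recent_size:
            return past_key_values  # nothing to evict yet

        def keep(t):
            # Concatenate the sink prefix with the recent-window suffix.
            return torch.cat(
                [
                    t.narrow(self.seq_dim, 0, self.start_size),
                    t.narrow(self.seq_dim, seq_len - self.recent_size, self.recent_size),
                ],
                dim=self.seq_dim,
            )

        return [(keep(k), keep(v)) for k, v in past_key_values]


# Tiny demo: one layer, batch 1, 2 heads, 10 cached tokens, head_dim 8.
cache = [(torch.randn(1, 2, 10, 8), torch.randn(1, 2, 10, 8))]
pruner = StartRecentKVCacheSketch(start_size=2, recent_size=4)
pruned = pruner(cache)
print(pruned[0][0].shape)  # torch.Size([1, 2, 6, 8]): 2 sinks + 4 recent tokens
```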
https://github.com/mit-han-lab/streaming-llm/blob/…
streaming-llm/data/mt_bench.jsonl at main - GitHub
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks - streaming-llm/data/mt_bench.jsonl at main · mit-han-lab/streaming-llm
https://github.com/mit-han-lab/streaming-llm/pull/…
Added requirements.txt with pinned package versions #4
Open • from KarimJedda:main • +21 −0 • Conversation: 1 • Commits: 3 • Checks: 0 • Files changed: 2