gpu-inference-server
Set up AI inference servers on cloud GPUs. Create private LLM APIs (vLLM, TGI), image generation endpoints, embedding services, and more. All with OpenAI-compatible interfaces that work with existing tools.
gpu-acceleration, cloud-infrastructure, api, image-generation
gpu-cli
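The entry above promises an OpenAI-compatible interface; a minimal sketch of what that means in practice is building the standard chat-completions payload and pointing it at your own server instead of OpenAI's. The base URL, port, and model name below are assumptions for illustration, not values the entry specifies:

```python
import json

# Hypothetical private endpoint; vLLM's OpenAI-compatible server listens
# on /v1 by default, so existing OpenAI client tooling can point here.
BASE_URL = "http://localhost:8000/v1"

def chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build a chat-completions payload in the OpenAI wire format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = chat_request("meta-llama/Llama-3-8b", "Hello")
body = json.dumps(payload)
# POST `body` to f"{BASE_URL}/chat/completions"; equivalently, any OpenAI
# client library can be configured with BASE_URL as its base_url.
```

Because the wire format is the stable contract, swapping vLLM for TGI (or a hosted API) only changes `BASE_URL`, not the payload.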
rag-systems
Build RAG systems - embeddings, vector stores, chunking, and retrieval optimization
embeddings, vector-store, chunking, retrieval-augmented-generation
pluginagentmarketplace
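The pieces this entry lists — chunking, embeddings, a vector store, retrieval — compose in a straightforward way. A self-contained sketch under toy assumptions (the embedding vectors are made up; a real pipeline would get them from an embedding model):

```python
import math

def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows before embedding."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity, the usual ranking metric in a vector store."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "vector store": chunk id -> embedding. Vectors here are invented
# for illustration; real ones come from an embedding service.
store = {
    "chunk-0": [0.9, 0.1, 0.0],
    "chunk-1": [0.1, 0.8, 0.2],
}
query = [1.0, 0.0, 0.0]  # pretend this is the embedded user question
best = max(store, key=lambda k: cosine(query, store[k]))
```

The overlap in `chunk_text` is one of the retrieval-optimization knobs the entry alludes to: it keeps sentences that straddle a chunk boundary retrievable from at least one chunk.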
nlp-basics
Process and analyze text using modern NLP techniques - preprocessing, embeddings, and transformers
preprocessing, embeddings, transformers, natural-language-processing
pluginagentmarketplace
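Of the techniques this entry names, preprocessing is the one that fits a few self-contained lines: lowercase, tokenize, drop stopwords. The stopword list below is a deliberately tiny illustrative set; real pipelines use NLTK/spaCy lists or skip word-level preprocessing entirely in favor of a transformer's subword tokenizer:

```python
import re

# Tiny stopword set for illustration only; not a standard list.
STOPWORDS = {"the", "a", "an", "is", "are", "and", "of", "to"}

def preprocess(text: str) -> list[str]:
    """Classic pipeline: lowercase, tokenize on alphanumerics, drop stopwords."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

tokens = preprocess("The cat is on the mat.")
```
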