    The LLM arms race

    Introducing the breakthroughs in LLMs.

    Ke Fang

    Created over 1 year ago

    2 Subscribers
    GitHub - jmorganca/ollama: Get up and running with Llama 2 and other large language models locally

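    Ollama serves pulled models through a local REST API, so a quick way to try a model
    from Python is to post to that endpoint. A minimal sketch, assuming Ollama is installed
    and "ollama pull llama2" has already been run; the model name and prompt are just
    placeholders:

        import requests  # third-party HTTP client: pip install requests

        # Ollama listens on localhost:11434 by default.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama2",              # any model pulled locally
                "prompt": "Why is the sky blue?",
                "stream": False,                # one JSON object instead of a token stream
            },
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["response"])          # the generated text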

    mlx-examples/mixtral

    Run the Mixtral 8x7B mixture-of-experts (MoE) model in MLX on Apple silicon.

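    For context, "8x7B mixture-of-experts" means each token is routed to only 2 of 8
    expert feed-forward networks, so far fewer parameters are active per token than the
    total count suggests. A minimal NumPy sketch of that top-2 routing step (illustrative
    only; the names and shapes are made up, and this is not MLX or Mixtral code):

        import numpy as np

        def top2_moe(x, gate_w, experts):
            """Route one token x through the top-2 of len(experts) expert networks."""
            logits = x @ gate_w                  # router scores, one per expert
            top2 = np.argsort(logits)[-2:]       # indices of the 2 highest-scoring experts
            weights = np.exp(logits[top2])
            weights /= weights.sum()             # softmax over just the selected 2
            # Weighted sum of the chosen experts' outputs; the other experts are never run.
            return sum(w * experts[i](x) for w, i in zip(weights, top2))

        # Toy usage: 8 "experts", each a random linear map on a 16-dim token vector.
        rng = np.random.default_rng(0)
        experts = [lambda x, W=rng.normal(size=(16, 16)): x @ W for _ in range(8)]
        gate_w = rng.normal(size=(16, 8))
        print(top2_moe(rng.normal(size=16), gate_w, experts).shape)   # -> (16,)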

    Phi-2: The surprising power of small language models

    Phi-2 is now accessible on the Azure model catalog.

    Mistral 7B

    The best 7B model to date, released under Apache 2.0.

    Introducing Gemini: our largest and most capable AI model

    Gemini is our most capable and general model, built to be multimodal and optimized for three different...

    Hands-on with Gemini: Interacting with multimodal AI

    Gemini is our natively multimodal AI model capable of reasoning across text, images, audio, video and...
