The last time I wrote the Issue was two weeks ago, so it's time for an update. A lot of interesting news happened during this time. Let's see.

## OpenAI

- **Canvas** is OpenAI's new approach to working with ChatGPT on writing and coding. The interface goes beyond simple chat interactions and offers a collaborative environment for complex projects: Canvas opens in a separate window, enabling seamless cooperation between you and ChatGPT. It shares some similarities with Claude's Artifacts feature, but has its own characteristics. Notably, OpenAI trained GPT-4o to act as a creative partner that recognizes when to open a canvas, make precise edits, or completely rewrite content. For more information, visit: [Introducing Canvas](https://openai.com/index/introducing-canvas/)
- **Realtime API** enables all paid developers to build low-latency, multimodal experiences in their apps. Similar to ChatGPT's Advanced Voice Mode, the Realtime API supports natural speech-to-speech conversations using the six preset voices already available in the API (a minimal connection sketch follows this list). For more information, visit: [Introducing the Realtime API](https://openai.com/index/introducing-the-realtime-api/)
- **whisper-large-v3-turbo** is a fine-tuned version of a pruned Whisper large-v3 that OpenAI released recently. The new model is significantly faster, with only a minor quality degradation (see the transcription sketch after this list). For more information, visit: [whisper-large-v3-turbo](https://github.com/openai/whisper/discussions/2363)
- **Prompt Caching** lets developers reduce costs and latency. By reusing recently seen input tokens, developers get a 50% discount and faster prompt processing times (see the sketch after this list for how to structure prompts around it). Here's an overview of the pricing: ![[image.png]] For more information, visit: [Prompt Caching](https://openai.com/index/api-prompt-caching/)
- **Prompt Generator** is a new feature in OpenAI's Playground. You describe the intended use of a model, and the Playground automatically generates prompts and valid schemas for functions and structured outputs. To try this feature, visit: [Prompt Generator in the Playground](https://platform.openai.com/playground/chat?models=gpt-4o)
- **MLE-bench** is a benchmark for assessing how well AI agents perform at machine learning engineering. OpenAI curated 75 ML engineering competitions from Kaggle, creating a diverse set of challenging tasks that evaluate real-world skills such as training models, preparing datasets, and running experiments. To learn more about the benchmark, visit: [MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering](https://arxiv.org/abs/2410.07095)
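Since the Realtime API is WebSocket-based, here is a minimal text-only connection sketch in Python. The endpoint, the `gpt-4o-realtime-preview-2024-10-01` model name, and the event types are my assumptions based on the launch docs, so check the official documentation before building on this.

```python
# Minimal text-only Realtime API sketch (pip install websockets).
# Endpoint, model name, and event types are assumptions from the launch docs.
import asyncio
import json
import os

import websockets


async def main() -> None:
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    # On newer versions of the websockets package the keyword is `additional_headers`.
    async with websockets.connect(url, extra_headers=headers) as ws:
        # Ask for a text-only response; audio streaming is omitted to keep the sketch short.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"modalities": ["text"], "instructions": "Say hello in one sentence."},
        }))
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "response.text.delta":
                print(event["delta"], end="", flush=True)
            elif event.get("type") == "response.done":
                break


asyncio.run(main())
```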
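The new Whisper checkpoint is also available through the open-source `openai-whisper` package. A quick transcription sketch, assuming the `turbo` model alias from the release notes and a local audio file:

```python
# Transcribe audio with the pruned, fine-tuned large-v3 checkpoint
# (pip install -U openai-whisper; ffmpeg must be on the PATH).
import whisper

# "turbo" is the short alias for large-v3-turbo in recent releases.
model = whisper.load_model("turbo")

result = model.transcribe("meeting.mp3")  # hypothetical local audio file
print(result["text"])
```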
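As for Prompt Caching, it kicks in automatically once a prompt passes the minimum length (around 1,024 tokens), and only the shared prefix is reusable, so the practical takeaway is to put static context first and the per-request input last. A rough sketch, assuming a long unchanging system prompt and the `cached_tokens` usage field mentioned in the announcement:

```python
# Structure requests so the static prefix can be served from cache (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical long, unchanging context; keep it byte-identical across calls
# so the cached prefix can be reused.
STATIC_INSTRUCTIONS = open("style_guide.md").read()


def review(snippet: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": STATIC_INSTRUCTIONS},               # stable prefix
            {"role": "user", "content": f"Review this snippet:\n{snippet}"},  # varying suffix
        ],
    )
    # Usage field name taken from the announcement; treat it as an assumption.
    details = response.usage.prompt_tokens_details
    print("cached prompt tokens:", getattr(details, "cached_tokens", 0))
    return response.choices[0].message.content
```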
## Black Forest Labs

- Black Forest Labs has announced the release of **FLUX1.1 [pro]**, their most advanced and efficient generative model to date, along with the beta version of the BFL API. **FLUX1.1 [pro]** generates images six times faster than the original FLUX.1 [pro] while improving image quality, prompt adherence, and diversity, and it is three times faster than the current, updated FLUX.1 [pro]. It has also achieved the highest Elo score on the Artificial Analysis image arena benchmark. For more information, visit: [FLUX1.1 [pro]](https://blackforestlabs.ai/announcing-flux-1-1-pro-and-the-bfl-api/)

## Google

- **Gemini 1.5 Flash-8B** is Google's latest Flash variant. This production-ready model offers lower prices, higher rate limits, and reduced latency for small prompts. You can access it for free via Google AI Studio (a minimal SDK sketch appears at the end of this issue). For more information, visit: [Gemini 1.5 Flash-8B](https://developers.googleblog.com/en/gemini-15-flash-8b-is-now-generally-available-for-use/)
- In Google AI Studio, you can now drag and drop files of any type directly into the platform, without uploading them to Drive first.

## Anthropic

- Anthropic introduced the Message Batches API, a new way to process large volumes of queries asynchronously in the Anthropic API (see the sketch below). For more information, visit: [Message Batches (Beta)](https://docs.anthropic.com/en/docs/build-with-claude/message-batches)
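Here is a minimal sketch of submitting a batch with the Python SDK. The `client.beta.messages.batches.create` path, the `processing_status` field, and the model name are assumptions based on the beta announcement, so double-check them against the Message Batches docs.

```python
# Submit an asynchronous batch of Claude requests (pip install anthropic).
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

questions = ["What is prompt caching?", "What is speculative decoding?"]

batch = client.beta.messages.batches.create(
    requests=[
        {
            "custom_id": f"question-{i}",  # your own ID, used to match results later
            "params": {
                "model": "claude-3-5-sonnet-20240620",
                "max_tokens": 256,
                "messages": [{"role": "user", "content": question}],
            },
        }
        for i, question in enumerate(questions)
    ]
)

# Results are produced asynchronously: poll the batch until processing ends,
# then download the results file (see the docs linked above for retrieval).
print(batch.id, batch.processing_status)
```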
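And since Gemini 1.5 Flash-8B is free to try, here is the promised sketch of calling it through the `google-generativeai` SDK. The `gemini-1.5-flash-8b` model name comes from the announcement; the rest is standard SDK usage.

```python
# Call the new 8B Flash variant (pip install google-generativeai).
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Model name as given in the announcement.
model = genai.GenerativeModel("gemini-1.5-flash-8b")

response = model.generate_content("Summarize this newsletter issue in one sentence.")
print(response.text)
```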