March 30, 2026 – Machine learning
Deliver hyper-personalized viewer experiences with an agentic AI movie assistant using Amazon Bedrock AgentCore and Amazon Nova Sonic 2.0
March 30, 2026 – Machine learning
Build a solar flare detection system on SageMaker AI with LSTM networks and ESA STIX data
The effective monitoring and characterization of solar flares demand sophisticated analysis of […]
March 26, 2026 – Machine learning
Run generative AI inference with Amazon Bedrock in Asia Pacific (New Zealand)
Kia ora! Customers in New Zealand have been asking for access to […]

March 26, 2026 – Machine learning
Introducing Amazon Polly Bidirectional Streaming: Real-time speech synthesis for conversational AI
Building natural conversational experiences requires speech synthesis that keeps pace with real-time […]

March 26, 2026 – Machine learning
Accelerating LLM fine-tuning with unstructured data using SageMaker Unified Studio and S3
Last year, AWS announced an integration between Amazon SageMaker Unified Studio and […]

March 26, 2026 – Machine learning
Building age-responsive, context-aware AI with Amazon Bedrock Guardrails
As you deploy generative AI applications to diverse user groups, you might […]

March 25, 2026 – Machine learning
Deploy voice agents with Pipecat and Amazon Bedrock AgentCore Runtime – Part 1
This post is a collaboration between AWS and Pipecat. Deploying intelligent voice […]

March 25, 2026 – Machine learning
Unlocking video insights at scale with Amazon Bedrock multimodal models
Video content is now everywhere, from security surveillance and media production to […]
March 25, 2026 – Machine learning
Reinforcement fine-tuning on Amazon Bedrock with OpenAI-compatible APIs: A technical walkthrough
In December 2025, we announced the availability of reinforcement fine-tuning (RFT) on […]
March 24, 2026 – Machine learning
Deploy SageMaker AI inference endpoints with set GPU capacity using training plans
Deploying large language models (LLMs) for inference requires reliable GPU capacity, especially […]