Author: ultroni1

  • Create a chat client

    A common scenario in an AI application is to connect to a generative AI model and use prompts to engage in a chat-based dialog with it.

    While you can use the Azure OpenAI SDK to connect directly to a model using key-based or Microsoft Entra ID authentication, when your model is deployed in an Azure AI Foundry project you can also use the Azure AI Foundry SDK to retrieve a project client, from which you can then get an authenticated OpenAI chat client for any model deployed in the project’s Azure AI Foundry resource. This approach makes it easy to write code that consumes models deployed in your project, switching between them simply by changing the model deployment name parameter.
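
    As a hedged sketch of that pattern (the endpoint and deployment name are placeholders, and the client-retrieval method name is an assumption that varies across versions of the Python library):

```python
def chat_once(project_endpoint: str, deployment: str, prompt: str) -> str:
    """Send one prompt to a model deployed in an Azure AI Foundry project."""
    # Requires the azure-ai-projects and azure-identity packages
    # and a signed-in Microsoft Entra identity.
    from azure.identity import DefaultAzureCredential
    from azure.ai.projects import AIProjectClient

    project_client = AIProjectClient(
        endpoint=project_endpoint,  # placeholder: your project endpoint URL
        credential=DefaultAzureCredential(),
    )
    # Assumption: method name differs between SDK versions (earlier previews
    # exposed it under an inference operations group).
    openai_client = project_client.get_openai_client(api_version="2024-10-21")
    response = openai_client.chat.completions.create(
        model=deployment,  # switch models by changing the deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

    Because the deployment name is just a parameter, pointing the same code at a different model deployed in the project is a one-line change.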


  • Work with project connections

    Each Azure AI Foundry project includes connected resources, which are defined both at the parent (Azure AI Foundry resource or hub) level, and at the project level. Each resource is a connection to an external service, such as Azure storage, Azure AI Search, Azure OpenAI, or another Azure AI Foundry resource.

    Screenshot of the connected resources page in Azure AI Foundry portal.

    With the Azure AI Foundry SDK, you can connect to a project and retrieve connections, which you can then use to consume the connected services.

    For example, the AIProjectClient object in Python has a connections property, which you can use to access the resource connections in the project. Methods of the connections object include:

    • connections.list(): Returns a collection of connection objects, each representing a connection in the project. You can filter the results by specifying an optional connection_type parameter with a valid enumeration, such as ConnectionType.AZURE_OPEN_AI.
    • connections.get(connection_name, include_credentials): Returns a connection object for the connection with the specified name. If the include_credentials parameter is True (the default value), the credentials required to connect to the connected service are also returned, such as an API key for an Azure AI services resource.

    The connection objects returned by these methods include connection-specific properties, including credentials, which you can use to connect to the associated resource.
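
    A hedged sketch of retrieving a single named connection, following the connections.get signature described above (the connection name you pass is a placeholder):

```python
def get_connection(project_endpoint: str, connection_name: str):
    """Retrieve a named project connection, including its credentials."""
    # Requires the azure-ai-projects and azure-identity packages
    # and a signed-in Microsoft Entra identity.
    from azure.identity import DefaultAzureCredential
    from azure.ai.projects import AIProjectClient

    project_client = AIProjectClient(
        endpoint=project_endpoint,
        credential=DefaultAzureCredential(),
    )
    # include_credentials=True (the default) also returns the secrets
    # needed to connect to the underlying service, such as an API key.
    connection = project_client.connections.get(
        connection_name, include_credentials=True
    )
    print(f"{connection.name} ({connection.type})")
    return connection
```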

    The following code example lists all of the resource connections that have been added to a project:

    Python

    from azure.identity import DefaultAzureCredential
    from azure.ai.projects import AIProjectClient

    try:
        # Get project client
        project_endpoint = "https://....."
        project_client = AIProjectClient(
            credential=DefaultAzureCredential(),
            endpoint=project_endpoint,
        )

        # List all connections in the project
        connections = project_client.connections
        print("List all connections:")
        for connection in connections.list():
            print(f"{connection.name} ({connection.type})")

    except Exception as ex:
        print(ex)


  • What is the Azure AI Foundry SDK?

    Azure AI Foundry provides a REST API that you can use to work with AI Foundry projects and the resources they contain. Additionally, multiple language-specific SDKs are available, enabling developers to write code that uses resources in an Azure AI Foundry project in their preferred development language. With an Azure AI Foundry SDK, developers can create applications that connect to a project, access the resource connections and models in that project, and use them to perform AI operations, such as sending prompts to a generative AI model and processing the responses.

    The core package for working with projects is the Azure AI Projects library, which enables you to connect to an Azure AI Foundry project and access the resources defined within it. Language-specific packages for the Azure AI Projects library are available for Python, .NET, and JavaScript.


  • Describe the purpose of tags

    As your cloud usage grows, it’s increasingly important to stay organized. A good organization strategy helps you understand your cloud usage and can help you manage costs.

    One way to organize related resources is to place them in their own subscriptions. You can also use resource groups to manage related resources. Resource tags are another way to organize resources. Tags provide extra information, or metadata, about your resources. This metadata is useful for:

    • Resource management: Tags enable you to locate and act on resources that are associated with specific workloads, environments, business units, and owners.
    • Cost management and optimization: Tags enable you to group resources so that you can report on costs, allocate internal cost centers, track budgets, and forecast estimated costs.
    • Operations management: Tags enable you to group resources according to how critical their availability is to your business. This grouping helps you formulate service-level agreements (SLAs). An SLA is an uptime or performance guarantee between you and your users.
    • Security: Tags enable you to classify data by its security level, such as public or confidential.
    • Governance and regulatory compliance: Tags enable you to identify resources that align with governance or regulatory compliance requirements, such as ISO 27001. Tags can also be part of your standards enforcement efforts. For example, you might require that all resources be tagged with an owner or department name.
    • Workload optimization and automation: Tags can help you visualize all of the resources that participate in complex deployments. For example, you might tag a resource with its associated workload or application name and use software such as Azure DevOps to perform automated tasks on those resources.
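
    As a small illustration of the standards-enforcement idea above, a check for required tags might look like this (the required tag names here are hypothetical, not an Azure default):

```python
# Hypothetical tagging standard: every resource must carry these tags.
REQUIRED_TAGS = {"owner", "department"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag names that a resource's tag dictionary lacks."""
    return REQUIRED_TAGS - set(resource_tags)

print(missing_tags({"owner": "finance-team"}))  # → {'department'}
```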


  • Describe the Microsoft Cost Management tool

    Microsoft Azure is a global cloud provider, meaning you can provision resources anywhere in the world. You can provision resources rapidly to meet a sudden demand, to test out a new feature, or even by accident. If you accidentally provision new resources, you may not be aware of them until it’s time for your invoice. Cost Management is a service that helps you avoid those situations.

    What is Cost Management?

    Cost Management provides the ability to quickly check Azure resource costs, create alerts based on resource spend, and create budgets that can be used to automate management of resources.

    Cost analysis is a subset of Cost Management that provides a quick visual for your Azure costs. Using cost analysis, you can quickly view the total cost in a variety of different ways, including by billing cycle, region, resource, and so on.

    You use cost analysis to explore and analyze your organizational costs. You can view aggregated costs by organization to understand where costs are accrued and to identify spending trends. And you can see accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget.


  • Explore the pricing calculator

    The pricing calculator is a tool that helps you understand potential Azure expenses. The pricing calculator is accessible from the internet and allows you to build out a configuration. The Total Cost of Ownership (TCO) calculator has been retired.

    Pricing calculator

    The pricing calculator is designed to give you an estimated cost for provisioning resources in Azure. You can get an estimate for individual resources, build out a solution, or use an example scenario to see an estimate of the Azure spend. The pricing calculator’s focus is on the cost of provisioned resources in Azure.

    With the pricing calculator, you can estimate the cost of any provisioned resources, including compute, storage, and associated network costs. You can even account for different storage options like storage type, access tier, and redundancy.


  • Describe factors that can affect costs in Azure

    Azure shifts development costs from the capital expense (CapEx) of building out and maintaining infrastructure and facilities to an operational expense (OpEx) of renting infrastructure as you need it, whether it’s compute, storage, networking, and so on.

    That OpEx cost can be impacted by many factors. Some of the impacting factors are:

    • Resource type
    • Consumption
    • Maintenance
    • Geography
    • Subscription type
    • Azure Marketplace

    Resource type

    A number of factors influence the cost of Azure resources. The type of resources, the settings for the resource, and the Azure region will all have an impact on how much a resource costs. When you provision an Azure resource, Azure creates metered instances for that resource. The meters track the resources’ usage and generate a usage record that is used to calculate your bill.

    Examples

    With a storage account, you specify a type such as blob, a performance tier, an access tier, redundancy settings, and a region. Creating the same storage account in different regions may show different costs, and changing any of the settings may also impact the price.

    With a virtual machine (VM), you may have to consider licensing for the operating system or other software, the processor and number of cores for the VM, the attached storage, and the network interface. Just like with storage, provisioning the same virtual machine in different regions may result in different costs.
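
    Conceptually, the metered billing described above amounts to summing each meter’s recorded usage multiplied by its per-unit rate. A toy sketch (all meter names, quantities, and rates here are invented for illustration, not real Azure prices):

```python
# Hypothetical usage records emitted by resource meters:
# (meter name, metered quantity, per-unit rate in USD)
usage_records = [
    ("VM compute hours", 730, 0.096),
    ("Managed disk GB-month", 128, 0.005),
    ("Network egress GB", 40, 0.087),
]

# The bill is the sum of usage multiplied by each meter's rate.
bill = sum(quantity * rate for _, quantity, rate in usage_records)
print(f"Estimated monthly cost: ${bill:.2f}")  # → Estimated monthly cost: $74.20
```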


  • Optimize model performance

    After you deploy your model to an endpoint, you can start interacting with it to see how it works. Let’s explore how you can use prompt engineering techniques to optimize your model’s performance.

    Apply prompt patterns to optimize your model’s output

    The quality of the questions you send to the language model directly influences the quality of the responses you get back. You can carefully construct your question, or prompt, to receive better and more interesting responses. The process of designing and optimizing prompts to improve the model’s performance is also known as prompt engineering.

    Prompt engineering requires users to ask relevant, specific, unambiguous, and well-structured questions, instructing the model to generate more accurate responses. To understand how to create well-defined prompts, let’s explore some patterns that help you improve the output of a model:

    • Instruct the model to act as a persona.
    • Guide the model to suggest better questions.
    • Provide a template to generate output in a specific format.
    • Understand how a model reasons by asking it to reflect.
    • Add context to improve the accuracy of the model’s output.
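
    Several of these patterns live in the message list you send to a chat model. A sketch combining the persona, template, and added-context patterns (the persona, output format, and parameter names are examples, not a fixed API):

```python
# System message combining two patterns: a persona and an output template.
SYSTEM_PROMPT = (
    "You act as a travel assistant for hiking trips. "                  # persona pattern
    'Answer with a JSON object with keys "destination" and "reason".'   # template pattern
)

def build_messages(user_question: str, context: str = "") -> list:
    """Assemble chat messages, optionally grounding the model with extra context."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if context:  # the added-context pattern improves output accuracy
        messages.append({"role": "system", "content": f"Context: {context}"})
    messages.append({"role": "user", "content": user_question})
    return messages

msgs = build_messages("Suggest a trail near Seattle", context="The user prefers day hikes")
print(len(msgs))  # → 3
```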


  • Deploy a model to an endpoint

    When you develop a generative AI app, you need to integrate language models into your application. To be able to use a language model, you need to deploy the model. Let’s explore how to deploy language models in Azure AI Foundry, after first understanding why you deploy a model.

    Why deploy a model?

    You train a model to generate output based on some input. To get value out of your model, you need a solution that allows you to send input to the model, which the model processes, after which the output is visualized for you.

    With generative AI apps, the most common type of solution is a chat application that expects a user question, which the model processes, to generate an adequate response. The response is then visualized to the user as a response to their question.

    You can integrate a language model with a chat application by deploying the model to an endpoint. An endpoint is a specific URL where a deployed model or service can be accessed. Each model deployment typically has its own unique endpoint, which allows different applications to communicate with the model through an API (Application Programming Interface).
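
    To make the endpoint idea concrete, here is a minimal sketch of an application POSTing a chat request to a deployed model’s endpoint using only the standard library. The URL, the "api-key" header, and the payload shape are assumptions for illustration; real applications typically use an SDK instead:

```python
import json
import urllib.request

def call_endpoint(endpoint_url: str, api_key: str, prompt: str) -> dict:
    """POST a chat request to a deployed model's endpoint and return the JSON reply."""
    payload = {"messages": [{"role": "user", "content": prompt}]}
    request = urllib.request.Request(
        endpoint_url,  # placeholder: the deployment's unique endpoint URL
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

    The point of the sketch is that the deployment is reachable at a specific URL over HTTP, so any application able to send an authenticated request can use the model.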


  • Explore the model catalog

    The model catalog in Azure AI Foundry provides a central repository of models that you can browse to find the right language model for your particular generative AI use case.

    Screenshot of the model catalog in Azure AI Foundry portal.

    Selecting a foundation model for your generative AI app is important as it affects how well your app works. To find the best model for your app, you can use a structured approach by asking yourself the following questions:

    • Can AI solve my use case?
    • How do I select the best model for my use case?
    • Can I scale for real-world workloads?

    Let’s explore each of these questions.

    Can AI solve my use case?

    Nowadays we have thousands of language models to choose from. The main challenge is to understand if there’s a model that satisfies your needs and to answer the question: Can AI solve my use case?

    To start answering this question, you need to discover, filter, and deploy a model. You can explore the available language models through three different catalogs:

    • Hugging Face: Vast catalog of open-source models across various domains.
    • GitHub: Access to diverse models via GitHub Marketplace and GitHub Copilot.
    • Azure AI Foundry: Comprehensive catalog with robust tools for deployment.

    Though you can use each of these catalogs to explore models, the model catalog in Azure AI Foundry makes it easiest to explore and deploy a model to build your prototype, while offering the best selection of models.

    Let’s explore some of the options you need to consider when searching for suitable models.

    Choose between large and small language models

    First of all, you have a choice between Large Language Models (LLMs) and Small Language Models (SLMs).

    • LLMs like GPT-4, Mistral Large, Llama3 70B, Llama 405B, and Command R+ are powerful AI models designed for tasks that require deep reasoning, complex content generation, and extensive context understanding.
    • SLMs like Phi3, Mistral OSS models, and Llama3 8B are efficient and cost-effective, while still handling many common Natural Language Processing (NLP) tasks. They’re perfect for running on lower-end hardware or edge devices, where cost and speed are more important than model complexity.

    Focus on a modality, task, or tool

    Language models like GPT-4 and Mistral Large are also known as chat completion models, designed to generate coherent and contextually appropriate text-based responses. When you need higher levels of performance in complex tasks like math, coding, science, strategy, and logistics, you can also use reasoning models like DeepSeek-R1 and o1.

    Beyond text-based AI, some models are multi-modal, meaning they can process images, audio, and other data types alongside text. Models like GPT-4o and Phi3-vision are capable of analyzing and generating both text and images. Multi-modal models are useful when your application needs to process and understand images, such as in computer vision or document analysis, or when you want to build an AI app that interacts with visual content, such as a digital tutor explaining images or charts.

    If your use case involves generating images, tools like DALL·E 3 and Stability AI can create realistic visuals from text prompts. Image generation models are great for designing marketing materials, illustrations, or digital art.

    Another group of task-specific models are embedding models like Ada and Cohere. Embedding models convert text into numerical representations and are used to improve search relevance by understanding semantic meaning. These models are often implemented in Retrieval Augmented Generation (RAG) scenarios and can enhance recommendation engines by linking similar content.
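
    Semantic search works because an embedding model maps similar texts to nearby vectors, and closeness is typically measured with cosine similarity. A sketch (the three-dimensional vectors are toy values; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy embeddings: the first two "documents" are semantically close.
doc_cat = [0.9, 0.1, 0.0]
doc_kitten = [0.85, 0.15, 0.05]
doc_invoice = [0.0, 0.2, 0.95]

print(cosine_similarity(doc_cat, doc_kitten) > cosine_similarity(doc_cat, doc_invoice))
# → True
```

    In a RAG pipeline, the query is embedded the same way and the highest-similarity documents are retrieved and passed to the model as grounding context.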

    When you want to build an application that interacts with other software tools dynamically, you can add function calling and JSON support. These capabilities allow AI models to work efficiently with structured data, making them useful for automating API calls, database queries, and structured data processing.
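
    Function calling works by describing each tool to the model as a JSON schema; the model then replies with structured JSON arguments that your code parses and dispatches. A sketch in the widely used tool-definition shape (the function name and parameters are invented for illustration):

```python
import json

# Hypothetical tool the model may ask the application to call.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# Instead of free text, the model returns structured JSON arguments,
# which the application parses and routes to the real function.
model_reply = '{"city": "Kuala Lumpur", "unit": "celsius"}'
args = json.loads(model_reply)
print(args["city"])  # → Kuala Lumpur
```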

    Specialize with regional and domain-specific models

    Certain models are designed for specific languages, regions, or industries. These models can outperform general-purpose generative AI in their respective domains. For example:

    • Core42 JAIS is an Arabic language LLM, making it the best choice for applications targeting Arabic-speaking users.
    • Mistral Large has a strong focus on European languages, ensuring better linguistic accuracy for multilingual applications.
    • Nixtla TimeGEN-1 specializes in time-series forecasting, making it ideal for financial predictions, supply chain optimization, and demand forecasting.

    If your project has regional, linguistic, or industry-specific needs, these models can provide more relevant results than general-purpose AI.

    Balance flexibility and performance with open versus proprietary models

    You also need to decide whether to use open-source models or proprietary models, each with its own advantages.

    Proprietary models are best for cutting-edge performance and enterprise use. Azure offers models like OpenAI’s GPT-4, Mistral Large, and Cohere Command R+, which deliver industry-leading AI capabilities. These models are ideal for businesses needing enterprise-level security, support, and high accuracy.

    Open-source models are best for flexibility and cost-efficiency. There are hundreds of open-source models available in the Azure AI Foundry model catalog, including models from Hugging Face, Meta, Databricks, Snowflake, and Nvidia. Open models give developers more control, allowing fine-tuning, customization, and local deployment.

    Whatever model you choose, you can use the Azure AI Foundry model catalog. Using models through the model catalog meets the key enterprise requirements for usage:

    • Data and privacy: you get to decide what happens with your data.
    • Security and compliance: built-in security.
    • Responsible AI and content safety: evaluations and content safety.

    Now that you know which language models are available to you, you should have an understanding of whether AI can indeed solve your use case. If you think a language model would enrich your application, you then need to select the specific model that you want to deploy and integrate.
