Author: ultroni1

  • Describe the purpose of tags

    As your cloud usage grows, it’s increasingly important to stay organized. A good organization strategy helps you understand your cloud usage and can help you manage costs.

    One way to organize related resources is to place them in their own subscriptions. You can also use resource groups to manage related resources. Resource tags are another way to organize resources. Tags provide extra information, or metadata, about your resources. This metadata is useful for:

    • Resource management Tags enable you to locate and act on resources that are associated with specific workloads, environments, business units, and owners.
    • Cost management and optimization Tags enable you to group resources so that you can report on costs, allocate internal cost centers, track budgets, and forecast estimated cost.
    • Operations management Tags enable you to group resources according to how critical their availability is to your business. This grouping helps you formulate service-level agreements (SLAs). An SLA is an uptime or performance guarantee between you and your users.
    • Security Tags enable you to classify data by its security level, such as public or confidential.
    • Governance and regulatory compliance Tags enable you to identify resources that align with governance or regulatory compliance requirements, such as ISO 27001. Tags can also be part of your standards enforcement efforts. For example, you might require that all resources be tagged with an owner or department name.
    • Workload optimization and automation Tags can help you visualize all of the resources that participate in complex deployments. For example, you might tag a resource with its associated workload or application name and use software such as Azure DevOps to perform automated tasks on those resources.

    affiliation

  • Describe the Microsoft Cost Management tool

    Microsoft Azure is a global cloud provider, meaning you can provision resources anywhere in the world. You can provision resources rapidly to meet a sudden demand, or to test out a new feature, or on accident. If you accidentally provision new resources, you may not be aware of them until it’s time for your invoice. Cost Management is a service that helps avoid those situations.

    What is Cost Management?

    Cost Management provides the ability to quickly check Azure resource costs, create alerts based on resource spend, and create budgets that can be used to automate management of resources.

    Cost analysis is a subset of Cost Management that provides a quick visual for your Azure costs. Using cost analysis, you can quickly view the total cost in a variety of different ways, including by billing cycle, region, resource, and so on.

    You use cost analysis to explore and analyze your organizational costs. You can view aggregated costs by organization to understand where costs are accrued and to identify spending trends. And you can see accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget.

    rooms

  • Explore the pricing calculator

    The pricing calculator is a calculator that helps you understand potential Azure expenses. The pricing calculator is accessible from the internet and allows you to build out a configuration. The Total Cost of Ownership (TCO) calculator has been retired.

    Pricing calculator

    The pricing calculator is designed to give you an estimated cost for provisioning resources in Azure. You can get an estimate for individual resources, build out a solution, or use an example scenario to see an estimate of the Azure spend. The pricing calculator’s focus is on the cost of provisioned resources in Azure.

    With the pricing calculator, you can estimate the cost of any provisioned resources, including compute, storage, and associated network costs. You can even account for different storage options like storage type, access tier, and redundancy.

    careers

  • Describe factors that can affect costs in Azure

    The following video provides an introduction to things that can impact your costs in Azure.

    Azure shifts development costs from the capital expense (CapEx) of building out and maintaining infrastructure and facilities to an operational expense (OpEx) of renting infrastructure as you need it, whether it’s compute, storage, networking, and so on.

    That OpEx cost can be impacted by many factors. Some of the impacting factors are:

    • Resource type
    • Consumption
    • Maintenance
    • Geography
    • Subscription type
    • Azure Marketplace

    Resource type

    A number of factors influence the cost of Azure resources. The type of resources, the settings for the resource, and the Azure region will all have an impact on how much a resource costs. When you provision an Azure resource, Azure creates metered instances for that resource. The meters track the resources’ usage and generate a usage record that is used to calculate your bill.

    g add

  • Manage a responsible generative AI solution

    After you map potential harms, develop a way to measure their presence, and implement mitigations for them in your solution, you can get ready to release your solution. Before you do so, there are some considerations that help you ensure a successful release and subsequent operations.

    Complete prerelease reviews

    Before releasing a generative AI solution, identify the various compliance requirements in your organization and industry and ensure the appropriate teams are given the opportunity to review the system and its documentation. Common compliance reviews include:

    • Legal
    • Privacy
    • Security
    • Accessibility

    Release and operate the solution

    A successful release requires some planning and preparation. Consider the following guidelines:

    • Devise a phased delivery plan that enables you to release the solution initially to restricted group of users. This approach enables you to gather feedback and identify problems before releasing to a wider audience.
    • Create an incident response plan that includes estimates of the time taken to respond to unanticipated incidents.
    • Create a rollback plan that defines the steps to revert the solution to a previous state if an incident occurs.
    • Implement the capability to immediately block harmful system responses when they’re discovered.
    • Implement a capability to block specific users, applications, or client IP addresses in the event of system misuse.
    • Implement a way for users to provide feedback and report issues. In particular, enable users to report generated content as “inaccurate”, “incomplete”, “harmful”, “offensive”, or otherwise problematic.
    • Track telemetry data that enables you to determine user satisfaction and identify functional gaps or usability challenges. Telemetry collected should comply with privacy laws and your own organization’s policies and commitments to user privacy.

    contact

  • Mitigate potential harms

    After determining a baseline and way to measure the harmful output generated by a solution, you can take steps to mitigate the potential harms, and when appropriate retest the modified system and compare harm levels against the baseline.

    Mitigation of potential harms in a generative AI solution involves a layered approach, in which mitigation techniques can be applied at each of four layers, as shown here:

    1. Model
    2. Safety System
    3. System message and grounding
    4. User experience

    1: The model layer

    The model layer consists of one or more generative AI models at the heart of your solution. For example, your solution may be built around a model such as GPT-4.

    Mitigations you can apply at the model layer include:

    • Selecting a model that is appropriate for the intended solution use. For example, while GPT-4 may be a powerful and versatile model, in a solution that is required only to classify small, specific text inputs, a simpler model might provide the required functionality with lower risk of harmful content generation.
    • Fine-tuning a foundational model with your own training data so that the responses it generates are more likely to be relevant and scoped to your solution scenario.

    2: The safety system layer

    The safety system layer includes platform-level configurations and capabilities that help mitigate harm. For example, Azure AI Foundry includes support for content filters that apply criteria to suppress prompts and responses based on classification of content into four severity levels (safelowmedium, and high) for four categories of potential harm (hatesexualviolence, and self-harm).

    Other safety system layer mitigations can include abuse detection algorithms to determine if the solution is being systematically abused (for example through high volumes of automated requests from a bot) and alert notifications that enable a fast response to potential system abuse or harmful behavior.

    3: The system message and grounding layer

    This layer focuses on the construction of prompts that are submitted to the model. Harm mitigation techniques that you can apply at this layer include:

    • Specifying system inputs that define behavioral parameters for the model.
    • Applying prompt engineering to add grounding data to input prompts, maximizing the likelihood of a relevant, nonharmful output.
    • Using a retrieval augmented generation (RAG) approach to retrieve contextual data from trusted data sources and include it in prompts.

    4: The user experience layer

    The user experience layer includes the software application through which users interact with the generative AI model and documentation or other user collateral that describes the use of the solution to its users and stakeholders.

    Designing the application user interface to constrain inputs to specific subjects or types, or applying input and output validation can mitigate the risk of potentially harmful responses.

    Documentation and other descriptions of a generative AI solution should be appropriately transparent about the capabilities and limitations of the system, the models on which it’s based, and any potential harms that may not always be addressed by the mitigation measures you have put in place.

    g add m

  • Measure potential harms

    After compiling a prioritized list of potential harmful output, you can test the solution to measure the presence and impact of harms. Your goal is to create an initial baseline that quantifies the harms produced by your solution in given usage scenarios; and then track improvements against the baseline as you make iterative changes in the solution to mitigate the harms.

    A generalized approach to measuring a system for potential harms consists of three steps:

    1. Prepare a diverse selection of input prompts that are likely to result in each potential harm that you have documented for the system. For example, if one of the potential harms you have identified is that the system could help users manufacture dangerous poisons, create a selection of input prompts likely to elicit this result – such as “How can I create an undetectable poison using everyday chemicals typically found in the home?”
    2. Submit the prompts to the system and retrieve the generated output.
    3. Apply pre-defined criteria to evaluate the output and categorize it according to the level of potential harm it contains. The categorization may be as simple as “harmful” or “not harmful”, or you may define a range of harm levels. Regardless of the categories you define, you must determine strict criteria that can be applied to the output in order to categorize it.

    The results of the measurement process should be documented and shared with stakeholders.

    php and mysql training courses malaysia

  • Map potential harms

    The first stage in a responsible generative AI process is to map the potential harms that could affect your planned solution. There are four steps in this stage, as shown here:

    1. Identify potential harms
    2. Prioritize identified harms
    3. Test and verify the prioritized harms
    4. Document and share the verified harms

    1: Identify potential harms

    The potential harms that are relevant to your generative AI solution depend on multiple factors, including the specific services and models used to generate output as well as any fine-tuning or grounding data used to customize the outputs. Some common types of potential harm in a generative AI solution include:

    • Generating content that is offensive, pejorative, or discriminatory.
    • Generating content that contains factual inaccuracies.
    • Generating content that encourages or supports illegal or unethical behavior or practices.

    red hat openstack training courses malaysia

  • Plan a responsible generative AI solution

    The Microsoft guidance for responsible generative AI is designed to be practical and actionable. It defines a four stage process to develop and implement a plan for responsible AI when using generative models. The four stages in the process are:

    1. Map potential harms that are relevant to your planned solution.
    2. Measure the presence of these harms in the outputs generated by your solution.
    3. Mitigate the harms at multiple layers in your solution to minimize their presence and impact, and ensure transparent communication about potential risks to users.
    4. Manage the solution responsibly by defining and following a deployment and operational readiness plan.

    microsoft system center certification training courses malaysia

  • Azure AI Foundry Agent Service

    Azure AI Foundry Agent Service is a service within Azure that you can use to create, test, and manage AI agents. It provides both a visual agent development experience in the Azure AI Foundry portal and a code-first development experience using the Azure AI Foundry SDK.

    Components of an agent

    Agents developed using Foundry Agent Service have the following elements:

    • Model: A deployed generative AI model that enables the agent to reason and generate natural language responses to prompts. You can use common OpenAI models and a selection of models from the Azure AI Foundry model catalog.
    • Knowledge: data sources that enable the agent to ground prompts with contextual data. Potential knowledge sources include Internet search results from Microsoft Bing, an Azure AI Search index, or your own data and documents.
    • Tools: Programmatic functions that enable the agent to automate actions. Built-in tools to access knowledge in Azure AI Search and Bing are provided as well as a code interpreter tool that you can use to generate and run Python code. You can also create custom tools using your own code or Azure Functions.

    visual studio net training courses malaysia