Author: ultroni1

  • Develop an object detection client application

    After you’ve trained an object detection model, you can use the Azure AI Custom Vision SDK to develop a client application that submits new images to be analyzed.

    C#

    using System;
    using System.IO;
    using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction;
    
    // Authenticate a client for the prediction API
    CustomVisionPredictionClient prediction_client = new CustomVisionPredictionClient(new ApiKeyServiceClientCredentials("<YOUR_PREDICTION_RESOURCE_KEY>"))
    {
        Endpoint = "<YOUR_PREDICTION_RESOURCE_ENDPOINT>"
    };
    
    // Get object detection predictions for an image
    MemoryStream image_data = new MemoryStream(File.ReadAllBytes("<PATH_TO_IMAGE_FILE>"));
    var result = prediction_client.DetectImage("<YOUR_PROJECT_ID>",
                                                 "<YOUR_PUBLISHED_MODEL_NAME>",
                                                 image_data);
    
    // Process predictions
    foreach (var prediction in result.Predictions)
    {
        if (prediction.Probability > 0.5)
        {
            var left = prediction.BoundingBox.Left;
            var top = prediction.BoundingBox.Top;
            var height = prediction.BoundingBox.Height;
        var width = prediction.BoundingBox.Width;
            Console.WriteLine($"{prediction.TagName} ({prediction.Probability})");
            Console.WriteLine($"  Left:{left}, Top:{top}, Height:{height}, Width:{width}");
        }
    }
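    The `BoundingBox` values returned by the prediction API are proportional to the image dimensions (values between 0 and 1), so most applications scale them to pixel coordinates before drawing boxes on the image. A minimal sketch, shown in Python for brevity (the image size and box values here are illustrative):

    ```python
    def to_pixel_box(box, image_width, image_height):
        """Convert a proportional (0-1) bounding box to pixel coordinates."""
        return (
            round(box["left"] * image_width),
            round(box["top"] * image_height),
            round(box["width"] * image_width),
            round(box["height"] * image_height),
        )

    # Illustrative values: a detection in a 1280x720 photograph
    print(to_pixel_box({"left": 0.25, "top": 0.10, "width": 0.50, "height": 0.40}, 1280, 720))
    # → (320, 72, 640, 288)
    ```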


  • Train an object detector

    Object detection is a form of computer vision in which a model is trained to detect the presence and location of one or more classes of object in an image.

    (Image: photograph with the location and type of fruits detected.)

    There are two components to an object detection prediction:

    • The class label of each object detected in the image. For example, you might ascertain that an image contains an apple, an orange, and a banana.
    • The location of each object within the image, indicated as coordinates of a bounding box that encloses the object.

    To train an object detection model, you can use the Azure AI Custom Vision portal to upload and label images before training, evaluating, testing, and publishing the model; or you can use the REST API or a language-specific SDK to write code that performs the training tasks.
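    Whichever route you choose, each training image for object detection is labeled with one or more tagged regions, and region coordinates are expressed as proportions of the image width and height. A minimal sketch of that representation, shown in Python for brevity (the `validate_region` helper and the dictionary layout are illustrative, not the SDK's actual types):

    ```python
    def validate_region(region):
        """Check that a labeled region uses proportional (0-1) coordinates
        and that the box stays inside the image bounds."""
        if not all(0.0 <= region[key] <= 1.0 for key in ("left", "top", "width", "height")):
            return False
        # The box must not extend past the right or bottom edge of the image
        return (region["left"] + region["width"] <= 1.0
                and region["top"] + region["height"] <= 1.0)

    # Illustrative labeled training image with two tagged regions
    labeled_image = {
        "file": "<PATH_TO_IMAGE_FILE>",
        "regions": [
            {"tag": "apple", "left": 0.1, "top": 0.2, "width": 0.3, "height": 0.3},
            {"tag": "banana", "left": 0.5, "top": 0.4, "width": 0.4, "height": 0.3},
        ],
    }
    print(all(validate_region(r) for r in labeled_image["regions"]))
    # → True
    ```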


  • Use Azure AI Custom Vision for object detection

    To use the Custom Vision service to create an object detection solution, you need two Custom Vision resources in your Azure subscription:

    • An Azure AI Custom Vision training resource – used to train a custom model based on your own training images.
    • An Azure AI Custom Vision prediction resource – used to generate predictions from new images based on your trained model.

    When you provision the Azure AI Custom Vision service in an Azure subscription, you can choose to create one or both of these resources. This separation of training and prediction provides flexibility. For example, you can use a training resource in one region to train your model using your own image data; and then deploy one or more prediction resources in other regions to support computer vision applications that need to use your model.

    Each resource has its own unique endpoint and authentication keys, which client applications use to connect to the service and authenticate.
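    Because training and prediction are separate resources, an application that does both typically stores two endpoint/key pairs in its configuration. For example (key names and placeholder values are illustrative, matching the placeholders used in the code above):

    ```json
    {
      "TrainingEndpoint": "<YOUR_TRAINING_RESOURCE_ENDPOINT>",
      "TrainingKey": "<YOUR_TRAINING_RESOURCE_KEY>",
      "PredictionEndpoint": "<YOUR_PREDICTION_RESOURCE_ENDPOINT>",
      "PredictionKey": "<YOUR_PREDICTION_RESOURCE_KEY>"
    }
    ```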


  • Identify Azure Cosmos DB APIs

    Azure Cosmos DB is Microsoft’s fully managed and serverless distributed database for applications of any size or scale, with support for both relational and non-relational workloads. Developers can build and migrate applications fast using their preferred open source database engines, including PostgreSQL, MongoDB, and Apache Cassandra. When you provision a new Cosmos DB instance, you select the database engine that you want to use. The choice of engine depends on many factors including the type of data to be stored, the need to support existing applications, and the skills of the developers who work with the data store.

    Azure Cosmos DB for NoSQL

    Azure Cosmos DB for NoSQL is Microsoft’s native non-relational service for working with the document data model. It manages data in JSON document format, and despite being a NoSQL data storage solution, uses SQL syntax to work with the data.
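    For example, given documents shaped like the commented item below, a Cosmos DB for NoSQL query reads much like standard SQL (the container name and property names here are illustrative):

    ```sql
    -- A JSON item in a hypothetical "products" container:
    -- { "id": "1", "name": "Gala apple", "category": "fruit", "price": 0.5 }

    SELECT p.name, p.price
    FROM products p
    WHERE p.category = "fruit"
    ```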


  • Describe Azure Cosmos DB

    Azure Cosmos DB supports multiple application programming interfaces (APIs) that enable developers to use the programming semantics of many common kinds of data store to work with data in a Cosmos DB database. The internal data structure is abstracted, enabling developers to use Cosmos DB to store and query data using APIs with which they’re already familiar.

    Cosmos DB uses indexes and partitioning to provide fast read and write performance and can scale to massive volumes of data. You can enable multi-region writes, adding the Azure regions of your choice to your Cosmos DB account so that globally distributed users can each work with data in their local replica.


  • Put responsible AI frameworks in action

    As discussed in the previous unit, Microsoft has developed and refined its own internal process to govern AI responsibly. This unit explains how this governance system works in a real situation. While every organization needs its own unique governance frameworks and review processes, we believe that our sensitive use framework can serve as a helpful starting point. One of Microsoft’s early steps in our responsible AI governance process was to introduce a sensitive uses review trigger. This framework helped our internal and customer-facing teams identify when specific use cases needed more guidance.

    Microsoft sensitive use case framework

    Per our responsible AI governance documentation, we consider an AI development or deployment scenario a “sensitive use” if it falls into one or more of the following categories:

    • Denial of consequential services: The scenario involves the use of AI in a way that may directly result in the denial of consequential services or support to an individual (for example, financial, housing, insurance, education, employment, or healthcare services).
    • Risk of harm: The scenario involves the use of AI in a way that may create a significant risk of physical, emotional, or psychological harm to an individual (for example, life or death decisions in military, safety-critical manufacturing environments, healthcare contexts, almost any scenario involving children or other vulnerable people, and so on).
    • Infringement on human rights: The scenario involves the use of AI in a way that may result in a significant restriction of personal freedom, opinion or expression, assembly or association, privacy, and so on (for example, in law enforcement or policing).

    We train our employees to use this framework to determine whether an AI use case should be flagged for further review—whether they’re a seller working with a customer or someone working on an internal AI solution. We also train our Responsible AI Champs for their role as liaison between employees and central governance teams.


  • Responsible AI at Microsoft

    It can be challenging to design and implement an effective AI governance system. In this unit, we take Microsoft as the example and explain how Microsoft ensures responsible AI is followed across the company. Based on this use case, consider how you could apply these ideas in your own organization.


  • Applying systems for AI governance

    AI governance engagement

    The specific processes and policies for your AI governance system depend on whether your company is using third-party systems or developing AI in-house. Based on this factor, we have provided recommendations to help your company govern your AI engagements.

    Engagement with AI systems developers

    For first-party AI systems, if your organization also plans to develop AI solutions or integrate AI into your existing products and services, there are some tasks for each team role.

    Your ethical governance system should:

    • Review or provide advice before the release of any new AI system, especially for sensitive use cases.
    • Ensure employees from all levels of the company feel free to surface ethical concerns before you sell AI or AI-integrated products and services.
    • Analyze the case and provide guidance to mitigate the risks if concerns arise while designing, developing, or selling the AI system.
    • Create processes to monitor the AI systems you deploy or sell to detect and mitigate model drift and decay over time.

    Your developers should:

    • Be given detailed and thorough standard guidance that can help them design and develop AI solutions to reflect your organization’s ethical principles.
    • Have guidelines and checklists for specific AI technologies, such as face recognition or generative AI.

    For organizations planning on using out-of-the-box third-party AI systems, we recommend learning about the third party’s commitment to responsible AI design to ensure it aligns with your own principles.

    For custom AI solutions, include your principles or standards in your request for proposal. Before deploying any third-party AI solution, create guidelines on how to safely operate and monitor the system. Train employees on these guidelines and ensure they’re being followed. Finally, your governance system should ensure the AI system has been rigorously tested.


  • Design a system for AI governance

    Each organization has its own guiding principles, but ultimately these principles need to be part of a larger responsible AI strategy to be effective. This strategy should encompass how your organization brings these principles to life, both within your organization and beyond.

    We recommend establishing a governance system that is tailored to your organization’s unique characteristics, culture, guiding principles, and level of engagement with AI. The tasks of the board should include designing responsible AI policies and measures, ensuring they’re being followed, and monitoring compliance.

    To help your organization get started, we have provided an overview of three common governance approaches: hiring a Chief Ethics Officer, establishing an ethics office, and forming an ethics committee. The first approach is centralized, and the others are decentralized. All of them have their benefits, but we recommend combining them in a hybrid approach. A governance system that reports to the board of directors and has financial support, human resources, and authority is more likely to create real change across an organization.


  • Identify guiding principles for responsible AI

    In the last unit, we discussed some of the societal implications of AI. We touched on the responsibility of businesses, governments, NGOs, and academic researchers to anticipate and mitigate unintended consequences of AI technology. As organizations consider these responsibilities, more are creating internal policies and practices to guide their AI efforts.

    At Microsoft, we’ve recognized six principles that we believe should guide AI development and use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For us, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in the products and services we use every day.
