Author: ultroni1

  • Explore file storage

    The ability to store data in files is a core element of any computing system. Files can be stored in local file systems on the hard disk of your personal computer, and on removable media such as USB drives; but in most organizations, important data files are stored centrally in some kind of shared file storage system. Increasingly, that central storage location is hosted in the cloud, enabling cost-effective, secure, and reliable storage for large volumes of data.

    The specific file format used to store data depends on a number of factors, including:

    • The type of data being stored (structured, semi-structured, or unstructured).
    • The applications and services that will need to read, write, and process the data.
    • The need for the data files to be readable by humans, or optimized for efficient storage and processing.

    Some common file formats are discussed below.

    Delimited text files

    Data is often stored in plain text format with specific field delimiters and row terminators. The most common format for delimited data is comma-separated values (CSV) in which fields are separated by commas, and rows are terminated by a carriage return / new line. Optionally, the first line may include the field names. Other common formats include tab-separated values (TSV) and space-delimited (in which tabs or spaces are used to separate fields), and fixed-width data in which each field is allocated a fixed number of characters. Delimited text is a good choice for structured data that needs to be accessed by a wide range of applications and services in a human-readable format.

    web development

  • Identify data formats

    Data is a collection of facts such as numbers, descriptions, and observations used to record information. Data structures in which this data is organized often represent entities that are important to an organization (such as customers, products, sales orders, and so on). Each entity typically has one or more attributes, or characteristics (for example, a customer might have a name, an address, a phone number, and so on).

    You can classify data as structuredsemi-structured, or unstructured.

    Structured data

    Structured data is data that adheres to a fixed schema, so all of the data has the same fields or properties. Most commonly, the schema for structured data entities is tabular – in other words, the data is represented in one or more tables that consist of rows to represent each instance of a data entity, and columns to represent attributes of the entity. For example, the following image shows tabular data representations for Customer and Product entities.

    testing

  • Responsible AI considerations for face-based solutions

    While all applications of artificial intelligence require considerations for responsible, system that rely on facial or other biometric data can be particularly problematic.

    When building a solution that uses facial data, considerations include (but aren’t limited to):

    • Data privacy and security. Facial data is personally identifiable, and should be considered sensitive and private. You should ensure that you have implemented adequate protection for facial data used for model training and inferencing.
    • Transparency. Ensure that users are informed about how their facial data is used, and who will have access to it.
    • Fairness and inclusiveness. Ensure that your face-based system can’t be used in a manner that is prejudicial to individuals based on their appearance, or to unfairly target individuals.

    supply chain

  • Verify and identify faces

    Verifying faces

    When a face is detected by the Face service, a unique ID is assigned to it and retained in the service resource for 24 hours. The ID is a GUID, with no indication of the individual’s identity other than their facial features.

    While the detected face ID is cached, subsequent images can be used to compare the new faces to the cached identity and determine if they’re similar (in other words, they share similar facial features) or to verify that the same person appears in two images.

    Diagram of a detected face matched in two images.

    This ability to compare faces anonymously can be useful in systems where it’s important to confirm that the same person is present on two occasions, without the need to know the actual identity of the person. For example, by taking images of people as they enter and leave a secured space to verify that everyone who entered leaves.

    Identifying faces

    For scenarios where you need to positively identify individuals, you can train a facial recognition model using face images.

    To train a facial recognition model with the Face service:

    1. Create a Person Group that defines the set of individuals you want to identify (for example, employees).
    2. Add a Person to the Person Group for each individual you want to identify.
    3. Add detected faces from multiple images to each person, preferably in various poses. The IDs of these faces will no longer expire after 24 hours (so they’re now referred to as persisted faces).
    4. Train the model.

    staff augmentation service

  • Detect and analyze faces

    To use the Azure AI Vision Face API, you must provision a resource for the service in an Azure subscription. You can provision Face as a single-service resource, or you can use the Face API in a multi-service Azure AI Services resource; which can be provisioned as a standalone resource or as part of an Azure AI Foundry hub.

    To use your resource from a client application you must connect to its endpoint using either key-based authentication or Microsoft Entra AI authentication. When using the REST interface you can provide the authentication key or token in the request header. When using a language-specific SDK (for example, the Python azure-ai-vision-face package or the Microsoft .NET Azure.AI.Vision.Face package), you use a FaceClient object to connect to the service.

    software development

  • Plan a face detection, analysis, or recognition solution

    The Face service provides functionality that you can use for:

    1. Face detection – for each detected face, the results include an ID that identifies the face and the bounding box coordinates indicating its location in the image.
    2. Face attribute analysis – you can return a wide range of facial attributes, including:
      • Head pose (pitchroll, and yaw orientation in 3D space)
      • Glasses (No glassesReading glassesSunglasses, or Swimming Goggles)
      • Mask (the presence of a face mask)
      • Blur (lowmedium, or high)
      • Exposure (under exposuregood exposure, or over exposure)
      • Noise (visual noise in the image)
      • Occlusion (objects obscuring the face)
      • Accessories (glasses, headwear, mask)
      • QualityForRecognition (lowmedium, or high)
    3. Facial landmark location – coordinates for key landmarks in relation to facial features (for example, eye corners, pupils, tip of nose, and so on)
    4. Face comparison – you can compare faces across multiple images for similarity (to find individuals with similar facial features) and verification (to determine that a face in one image is the same person as a face in another image)
    5. Facial recognition – you can train a model with a collection of faces belonging to specific individuals, and use the model to identify those people in new images.
    6. Facial liveness – liveness can be used to determine if the input video is a real stream or a fake to prevent bad-intentioned individuals from spoofing a facial recognition system.

    school management

  • Automated evaluations

    Automated evaluations in Azure AI Foundry portal enable you to assess the quality and content safety performance of models, datasets, or prompt flows.

    Evaluation data

    To evaluate a model, you need a dataset of prompts and responses (and optionally, expected responses as “ground truth”). You can compile this dataset manually or use the output from an existing application; but a useful way to get started is to use an AI model to generate a set of prompts and responses related to a specific subject. You can then edit the generated prompts and responses to reflect your desired output, and use them as ground truth to evaluate the responses from another model.

    Screenshot of AI-generated evaluation data.

    Evaluation metrics

    Automated evaluation enables you to choose which evaluators you want to assess your model’s responses, and which metrics those evaluators should calculate. There are evaluators that help you measure:

    • AI Quality: The quality of your model’s responses is measured by using AI models to evaluate them for metrics like coherence and relevance and by using standard NLP metrics like F1 score, BLEU, METEOR, and ROUGE based on ground truth (in the form of expected response text)
    • Risk and safety: evaluators that assess the responses for content safety issues, including violence, hate, sexual content, and content related to self-harm.

    quote

  • Manually evaluate the performance of a model

    During the early phases of the development of your generative AI app, you want to experiment and iterate quickly. To easily assess whether your selected language model and app, created with prompt flow, meet your requirements, you can manually evaluate models and flows in the Azure AI Foundry portal.

    Even when your model and app are already in production, manual evaluations are a crucial part of assessing performance. As manual evaluations are done by humans, they can provide insights that automated metrics might miss.

    Let’s explore how you can manually evaluate your selected models and app in the Azure AI Foundry portal.

    Prepare your test prompts

    To begin the manual evaluation process, it’s essential to prepare a diverse set of test prompts that reflect the range of queries and tasks your app is expected to handle. These prompts should cover various scenarios, including common user questions, edge cases, and potential failure points. By doing so, you can comprehensively assess the app’s performance and identify areas for improvement.

    mobile app development

  • Assess the model performance

    Evaluating your model’s performance at different phases is crucial to ensure its effectiveness and reliability. Before exploring the various options you have to evaluate your model, let’s explore the aspects of your application you can evaluate.

    When you develop a generative AI app, you use a language model in your chat application to generate a response. To help you decide which model you want to integrate into your application, you can evaluate the performance of an individual language model:

    Diagram of an interaction with a language model.

    An input (1) is provided to a language model (2), and a response is generated as output (3). The model is then evaluated by analyzing the input, the output, and optionally comparing it to predefined expected output.

    managed it services

  • Use Video Analyzer widgets and APIs

    hile you can perform all video analysis tasks in the Azure Video Indexer portal, you may want to incorporate the service into custom applications. There are two ways you can accomplish this.

    Azure Video Indexer widgets

    The widgets used in the Azure Video Indexer portal to play, analyze, and edit videos can be embedded in your own custom HTML interfaces. You can use this technique to share insights from specific videos with others without giving them full access to your account in the Azure Video Indexer portal.

    Video Analyzer widgets in a custom web page

    Azure Video Indexer API

    Azure Video Indexer provides a REST API that you can use to obtain information about your account, including an access token.

    HTTPCopy

    https://api.videoindexer.ai/Auth/<location>/Accounts/<accountId>/AccessToken
    

    You can then use your token to consume the REST API and automate video indexing tasks, creating projects, retrieving insights, and creating or deleting custom models.

    For example, a GET call to https://api.videoindexer.ai/<location>/Accounts/<accountId>/Customization/CustomLogos/Logos/<logoId>?<accessToken> REST endpoint returns the specified logo. In another example, you can send a GET request to https://api.videoindexer.ai/<location>/Accounts/<accountId>/Videos?<accessToken>, which returns details of videos in your account, similar to the following JSON example:

    JSONCopy

    {
        "accountId": "SampleAccountId",
        "id": "30e66ec1b1",
        "partition": null,
        "externalId": null,
        "metadata": null,
        "name": "test3",
        "description": null,
        "created": "2018-04-25T16=50=00.967+00=00",
        "lastModified": "2018-04-25T16=58=13.409+00=00",
        "lastIndexed": "2018-04-25T16=50=12.991+00=00",
        "privacyMode": "Private",
        "userName": "SampleUserName",
        "isOwned": true,
        "isBase": true,
        "state": "Processing",
        "processingProgress": "",
        "durationInSeconds": 13,
        "thumbnailVideoId": "30e66ec1b1",
        "thumbnailId": "55848b7b-8be7-4285-893e-cdc366e09133",
        "social": {
            "likedByUser": false,
            "likes": 0,
            "views": 0
        },
        "searchMatches": [],
        "indexingPreset": "Default",
        "streamingPreset": "Default",
        "sourceLanguage": "en-US"
    }

    loyalty