Author: ultroni1

  • Visualize with AI from text to image

    With generative AI, visualizing ideas is more accessible than ever. One of the most exciting advancements is its ability to transform text descriptions into images. This technology, known as text-to-image generation, uses AI models to translate text into visual representations. Imagine you’ve written a short story and want illustrations to bring it to life. By describing a scene in text, AI can create illustrations that match your description in no time, helping you find inspiration quickly, or letting you review your story visually and decide whether any changes would enhance it further.

    This technology isn’t limited to authors. Designers, marketers, educators, and anyone with a creative vision can benefit from it. For instance, a teacher could describe a historical event and have AI generate an image to make the lesson more engaging for students. A marketer could visualize a campaign concept before it’s executed. The possibilities are endless.

    This video will introduce you to text-to-image technology and how you can access it today in Microsoft Copilot. By leveraging tools like DALL·E, integrated into Microsoft Designer, you can turn your textual descriptions into vivid images, bridging the gap between imagination and reality. Whether you’re looking to enhance a story, create visuals for a project, or simply explore your creativity, text-to-image AI is a powerful tool at your disposal.


  • AI linguistics

    You might come across some interesting AI acronyms such as Large Language Models (LLMs) or Natural Language Generation (NLG). These acronyms are a part of a branch of AI called Natural Language Processing (NLP).

    These technologies enable computers to understand, generate, and respond to human language in new ways. From suggesting words while typing a message, to giving ideas for a creative project, AI provides further support for our creative endeavors.


  • What is generative AI?

    Generative AI is transforming the approach to productivity. Recent advancements in AI have greatly improved natural language processing and generation. These technologies now enable the creation of images, videos, texts, and audio from simple descriptions, transforming how you interact with technology.

    This video reveals how generative AI can provide support in the creative process and enhance productivity, highlighting its potential to reshape various industries by empowering people to achieve more with less. By automating repetitive tasks and providing creative suggestions, generative AI allows you to focus on what truly matters: envisioning innovative ideas, setting ambitious goals, and pursuing your dreams.

    Now that you have the foundations down, you might be curious about how it works. Generative AI functions through AI models, which are mathematical structures that learn from patterns in data using algorithms. There are several types of AI models with varying capabilities. Some AI models are designed to identify and classify information, while others, like generative AI models, excel in creating content.

    This video shows how the use of generative AI models varies based on one’s technical ability. For example, experts can customize these models for complex tasks, while beginners can use preexisting models or tools with minimal technical knowledge.


  • Implement RAG in a prompt flow

    After uploading data to Azure AI Foundry and creating an index on your data using the integration with Azure AI Search, you can implement the RAG pattern with Prompt Flow to build a generative AI application.

    Prompt Flow is a development framework for defining flows that orchestrate interactions with an LLM.

    Diagram of a prompt flow.

    A flow begins with one or more inputs, usually a question or prompt entered by a user and, in the case of iterative conversations, the chat history to that point.

    The flow is then defined as a series of connected tools, each of which performs a specific operation on the inputs and other environmental variables. There are multiple types of tools that you can include in a prompt flow to perform tasks such as:

    • Running custom Python code
    • Looking up data values in an index
    • Creating prompt variants, which let you define multiple versions of a prompt for a large language model (LLM), vary system messages or prompt wording, and compare and evaluate the results from each variant
    • Submitting a prompt to an LLM to generate results

    Finally, the flow has one or more outputs, typically to return the generated results from an LLM.
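    Conceptually, a flow like the one described above can be sketched as a chain of plain Python functions. The tool names and behaviors below are hypothetical stand-ins for real Prompt Flow tools, not the actual framework API:

    ```python
    # A minimal, hypothetical sketch of a flow: inputs -> connected tools -> outputs.
    # The tool functions below are illustrative stand-ins, not real Prompt Flow tools.

    def index_lookup_tool(question):
        # Stand-in for a tool that looks up data values in an index.
        return ["Retrieved passage relevant to: " + question]

    def prompt_tool(question, context):
        # Stand-in for a tool that assembles the prompt sent to the LLM.
        return f"Context: {context}\nQuestion: {question}"

    def llm_tool(prompt):
        # Stand-in for a tool that submits the prompt to an LLM.
        return "Generated answer for: " + prompt

    def run_flow(question, chat_history=None):
        # Inputs: a user question and, for conversations, the chat history.
        context = index_lookup_tool(question)    # tool 1: index lookup
        prompt = prompt_tool(question, context)  # tool 2: build the prompt
        answer = llm_tool(prompt)                # tool 3: call the LLM
        return {"answer": answer}                # output: the generated result

    print(run_flow("Which plans include cloud storage?")["answer"])
    ```

    In a real flow, each stand-in would be replaced by a configured tool, and the connections between them would be defined in the flow itself rather than in code.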


  • Create a RAG-based client application

    When you’ve created an Azure AI Search index for your contextual data, you can use it with an OpenAI model. To ground prompts with data from your index, the Azure OpenAI SDK supports extending the request with connection details for the index.

    The following Python code example shows how to implement this pattern.

    Python

    from openai import AzureOpenAI
    
    # Get an Azure OpenAI chat client
    chat_client = AzureOpenAI(
        api_version = "2024-12-01-preview",
        azure_endpoint = open_ai_endpoint,
        api_key = open_ai_key
    )
    
    # Initialize prompt with system message
    prompt = [
        {"role": "system", "content": "You are a helpful AI assistant."}
    ]
    
    # Add a user input message to the prompt
    input_text = input("Enter a question: ")
    prompt.append({"role": "user", "content": input_text})
    
    # Additional parameters to apply RAG pattern using the AI Search index
    rag_params = {
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": search_url,
                    "index_name": "index_name",
                    "authentication": {
                        "type": "api_key",
                        "key": search_key,
                    }
                }
            }
        ],
    }
    
    # Submit the prompt with the index information
    response = chat_client.chat.completions.create(
        model="<model_deployment_name>",
        messages=prompt,
        extra_body=rag_params
    )
    
    # Print the contextualized response
    completion = response.choices[0].message.content
    print(completion)
    

    In this example, the search against the index is keyword-based: the query consists of the text in the user prompt, which is matched to text in the indexed documents. When using an index that supports it, an alternative approach is to use a vector-based query in which the index and the query use numeric vectors to represent text tokens. Searching with vectors enables matching based on semantic similarity as well as literal text matches.

    To use a vector-based query, you can modify the specification of the Azure AI Search data source details to include an embedding model, which is then used to vectorize the query text.

    Python

    rag_params = {
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": search_url,
                    "index_name": "index_name",
                    "authentication": {
                        "type": "api_key",
                        "key": search_key,
                    },
                    # Params for vector-based query
                    "query_type": "vector",
                    "embedding_dependency": {
                        "type": "deployment_name",
                        "deployment_name": "<embedding_model_deployment_name>",
                    },
                }
            }
        ],
    }


  • Make your data searchable

    When you want to create an agent that uses your own data to generate accurate answers, you need to be able to search your data efficiently. When you build an agent with Azure AI Foundry, you can use the integration with Azure AI Search to retrieve the relevant context in your chat flow.

    Azure AI Search is a retriever that you can include when building a language model application with prompt flow. Azure AI Search allows you to bring your own data, index your data, and query the index to retrieve any information you need.

    Diagram showing an index being queried to retrieve grounding data.

    Using a vector index

    While a text-based index will improve search efficiency, you can usually achieve a better data retrieval solution by using a vector-based index that contains embeddings that represent the text tokens in your data source.

    An embedding is a special format of data representation that a search engine can use to easily find the relevant information. More specifically, an embedding is a vector of floating-point numbers.

    For example, imagine you have two documents with the following contents:

    • “The children played joyfully in the park.”
    • “Kids happily ran around the playground.”

    These two documents contain texts that are semantically related, even though different words are used. By creating vector embeddings for the text in the documents, the relation between the words in the text can be mathematically calculated.
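    As an illustration of that calculation, the two sentences can be compared with cosine similarity over their embedding vectors. The vector values below are made up for illustration; a real solution would obtain them from an embedding model:

    ```python
    import math

    # Toy embedding vectors (illustrative values, not real model output).
    doc1 = [0.12, 0.85, 0.31, 0.44]  # "The children played joyfully in the park."
    doc2 = [0.10, 0.80, 0.35, 0.40]  # "Kids happily ran around the playground."
    doc3 = [0.90, 0.05, 0.70, 0.02]  # an unrelated sentence

    def cosine_similarity(a, b):
        # Cosine similarity: dot product divided by the product of vector norms.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Semantically related sentences score close to 1; unrelated ones score lower.
    print(cosine_similarity(doc1, doc2))
    print(cosine_similarity(doc1, doc3))
    ```

    With real embeddings, the two park/playground sentences would likewise score much closer to each other than to unrelated text, which is exactly what a vector index exploits at query time.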


  • Understand how to ground your language model

    Language models excel in generating engaging text, and are ideal as the base for agents. Agents provide users with an intuitive chat-based application to receive assistance in their work. When designing an agent for a specific use case, you want to ensure your language model is grounded and uses factual information that is relevant to what the user needs.

    Though language models are trained on a vast amount of data, they may not have access to the knowledge you want to make available to your users. To ensure that an agent is grounded on specific data to provide accurate and domain-specific responses, you can use Retrieval Augmented Generation (RAG).

    Understanding RAG

    RAG is a technique that you can use to ground a language model. In other words, it’s a process for retrieving information that is relevant to the user’s initial prompt. In general terms, the RAG pattern incorporates the following steps:

    Diagram of the retrieval augmented generation pattern.
    1. Retrieve grounding data based on the initial user-entered prompt.
    2. Augment the prompt with grounding data.
    3. Use a language model to generate a grounded response.

    By retrieving context from a specified data source, you ensure that the language model uses relevant information when responding, instead of relying on its training data.

    Using RAG is a powerful and easy-to-use technique for many cases in which you want to ground your language model and improve the factual accuracy of your generative AI app’s responses.
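    A minimal sketch of the three steps in plain Python, with hypothetical `retrieve` and `generate` functions standing in for a real search index and language model:

    ```python
    def retrieve(user_prompt):
        # Step 1 stand-in: query a data source (e.g., a search index) for grounding data.
        knowledge_base = {
            "return policy": "Items can be returned within 30 days with a receipt.",
        }
        return [text for topic, text in knowledge_base.items()
                if topic in user_prompt.lower()]

    def generate(augmented_prompt):
        # Step 3 stand-in: a real app would call a language model here.
        return f"Answer based on: {augmented_prompt}"

    def rag_respond(user_prompt):
        grounding_data = retrieve(user_prompt)                    # 1. Retrieve
        augmented = f"{user_prompt}\nContext: {grounding_data}"   # 2. Augment
        return generate(augmented)                                # 3. Generate

    print(rag_respond("What is your return policy?"))
    ```

    The essential point of the pattern is visible even in this sketch: the model's answer is produced from retrieved context injected into the prompt, not from whatever the model happens to remember from training.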


  • Set up, configure, and troubleshoot GitHub Copilot

    This unit explains how to sign up for GitHub Copilot, how to configure GitHub Copilot by using VS Code, and how to troubleshoot GitHub Copilot by using VS Code.

    Sign up for GitHub Copilot

    Before you can start using GitHub Copilot, you need to set up a free trial or subscription for your account.

    To get started, select your GitHub profile photo, and then select Settings. Copilot is on the left menu under Code, planning, and automation.

    After you sign up, you need to install an extension for your preferred environment. GitHub Copilot is available as an unobtrusive extension for VS Code, Visual Studio, JetBrains IDEs, and Neovim, and works on GitHub.com without an extension.

    For this module, you’ll just review extensions and configurations for VS Code. The exercise that you’ll complete in the next unit uses VS Code.

    If you’re using a different environment, you can find specific links to set up other environments in the “References” section at the end of this module.


  • Interact with Copilot

    This unit explores ways that you can maximize your interaction with GitHub Copilot in your development environment. By understanding the service’s features and capabilities, you learn how to use it effectively.

    The following sections describe the various ways to trigger and use GitHub Copilot, along with examples and shortcuts to help you get the most out of it.

    Inline suggestions

    Inline suggestions are the most immediate form of assistance in Copilot. As you type, Copilot analyzes your code and context to offer real-time code completions. This feature predicts what you might want to write next and displays suggestions in a subtle, unobtrusive way.

    The suggestions that Copilot offers appear as grayed-out text ahead of your cursor.

    • To accept a suggestion, select the Tab key or the > (right arrow) key.
    • To reject a suggestion, keep typing or select the Esc key.

    Inline suggestions are especially useful when you’re working on repetitive tasks or you need quick boilerplate code.

    Here’s an example:

    Python

    def calculate_average(numbers):
        # Start typing here and watch Copilot suggest the function body
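    For reference, a completed function similar to what Copilot typically suggests for this signature might look like the following. The body shown is an illustrative example, not guaranteed Copilot output:

    ```python
    def calculate_average(numbers):
        # A body similar to what Copilot might suggest for this function name.
        if not numbers:
            return 0
        return sum(numbers) / len(numbers)

    print(calculate_average([2, 4, 6]))
    ```

    Because Copilot infers intent from the function name and surrounding context, descriptive names like `calculate_average` tend to produce more accurate suggestions than generic ones.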


  • GitHub Copilot, your AI pair programmer

    It’s no secret that AI is disrupting the technology industry. AI is shaping how development teams work and build software. These advancements in AI can enhance the productivity of developers around the world.

    The addition of AI features to the developer tools that you use and love helps you collaborate, develop, test, and ship your products faster and more efficiently than ever before. GitHub Copilot is a service that provides you with an AI pair programmer that works with all of the popular programming languages.

    In recent research, GitHub and Microsoft found that developers experience a significant productivity boost when they use GitHub Copilot to work on real-world projects and tasks. In fact, in the three years since its launch, developers have experienced the following benefits while using GitHub Copilot:

    • 46% of new code now written by AI
    • 55% faster overall developer productivity
    • 74% of developers feeling more focused on satisfying work

    Microsoft developed GitHub Copilot in collaboration with OpenAI. GitHub Copilot is powered by the OpenAI Codex system, which has broad knowledge of how people use code and is more capable than GPT-3 at code generation, in part because it was trained on a dataset with a larger concentration of public source code.

    GitHub Copilot is available as an extension for VS Code, Visual Studio, Vim/Neovim, and the JetBrains suite of IDEs.
