Author: ultroni1

  • What is Bash?

    Bash is a vital tool for managing Linux machines. The name is short for “Bourne Again Shell.”

    A shell is a program that commands the operating system to perform actions. You can enter commands in a console on your computer and run the commands directly, or you can use scripts to run batches of commands. Shells like PowerShell and Bash give system administrators the power and precision they need for fine-tuned control of the computers they’re responsible for.

    There are other Linux shells, including csh and zsh, but Bash became the de facto Linux standard. That’s because Bash is compatible with Unix’s first serious shell, the Bourne shell, also known as sh. Bash incorporates the best features of its predecessors. But Bash also has some fine features of its own, including built-in commands and the ability to invoke external programs.

    One reason for Bash’s success is its simplicity. Bash, like the rest of Linux, is based on the Unix design philosophy. As Peter Salus summarized in his book A Quarter Century of Unix, three of the “big ideas” embodied in Unix are:

    • Programs do one thing and do it well
    • Programs work together
    • Programs use text streams as the universal interface

    The last part is key to understanding how Bash works. In Unix and Linux, everything is a file. That means you can use the same commands without worrying about whether the I/O stream—the input and output—comes from a keyboard, a disk file, a socket, a pipe, or another I/O abstraction.
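
    The "text streams as the universal interface" idea can be made concrete with a small sketch. The file path and sample words below are illustrative; the point is that the same grep command works whether its input arrives from a disk file or a pipe.

```shell
# Write a small sample file, then count matching lines two ways.
printf 'alpha\nbeta\nalpha\n' > /tmp/words.txt

grep -c 'alpha' /tmp/words.txt                    # input from a disk file
printf 'alpha\nbeta\nalpha\n' | grep -c 'alpha'   # input from a pipe
```

    Both invocations print 2: grep neither knows nor cares where the stream comes from.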

  • Take action with Microsoft Fabric Activator

    Real-time data analytics is commonly based on ingesting and processing a data stream: a perpetual series of data, typically related to specific point-in-time events, such as readings from an environmental IoT weather sensor. When monitoring surfaces changing data, anomalies, or critical events, alerts can be generated or actions triggered. Real-Time Intelligence in Fabric includes a tool called Activator that triggers actions based on streaming data. For example, a stream from a weather sensor might be used to email sailors when wind thresholds are met. When a condition is met, an action is taken, such as alerting users, executing Fabric job items like a pipeline, or kicking off a Power Automate workflow. The condition can be a defined threshold, a pattern (such as events recurring over a time period), or the result of logic defined in a Kusto Query Language (KQL) query.
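
    The threshold style of logic can be sketched in a few lines of shell. This is only an illustration: the wind readings and the 25-knot limit are made-up values, and Activator itself is configured in Fabric rather than scripted this way.

```shell
# Scan a stream of wind-speed readings (knots) and alert when a
# reading exceeds the threshold. All values here are illustrative.
threshold=25
printf '12\n18\n27\n22\n30\n' | while read -r wind; do
  if [ "$wind" -gt "$threshold" ]; then
    echo "ALERT: wind ${wind} knots exceeds ${threshold} knots"
  fi
done
```

    Here the readings of 27 and 30 each trigger an alert; the others pass through silently.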

  • Use Microsoft Fabric Monitor Hub

    Visualization tools make monitoring easier. They help you identify trends or anomalies. Monitor hub is the monitoring visualization tool in Microsoft Fabric. Monitor hub collects and aggregates data from selected Fabric items and processes. It stores Fabric activity data in a common interface so you can view the status of multiple different data integration, transformation, movement, and analysis activities in Fabric in one place, rather than monitor each separately.

    Activities displayed in the Monitor hub

    Some of the activities you can see monitoring metadata for in the Microsoft Fabric Monitor hub include:

    • Data pipeline execution history
    • Dataflow executions
    • Datamart and semantic model refreshes
    • Spark job and notebook execution history and job details

  • Understand monitoring

    Monitoring is the process of collecting system data and metrics that determine if a system is healthy and operating as expected. Monitoring exposes errors that occurred and when they happened. To investigate issues and remediate errors, historical data is analyzed to get a picture of the health of a system or process.

    Monitoring Fabric activities

    In Fabric, you schedule activities and jobs that perform tasks like data movement and transformation. Activities have dependencies on one another, so you need to make sure that data arrives in its expected location on time and that system errors or delays don’t affect users or downstream activities. End-to-end processes need to be managed to ensure they’re reliable, performant, and resilient. One aspect of this monitoring is identifying and handling long-running operations and errors effectively, which helps you minimize downtime and quickly address underlying issues.
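
    One simple form of that check, flagging long-running operations, can be sketched in shell. The job names, durations, and the 600-second limit below are made-up examples, not real Fabric monitoring output.

```shell
# Flag jobs whose duration (seconds) exceeds an allowed maximum.
max_seconds=600
printf 'ingest_orders 420\nrefresh_model 910\ncopy_files 135\n' |
while read -r job seconds; do
  if [ "$seconds" -gt "$max_seconds" ]; then
    echo "LONG-RUNNING: $job took ${seconds}s (limit ${max_seconds}s)"
  fi
done
```

    Only refresh_model exceeds the limit, so only that job is flagged for investigation.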

    The following activities in Fabric allow you to perform tasks that deliver data to users. These activities should be monitored:

    • Data pipeline activity – A data pipeline is a group of activities that together perform a data ingestion task. Pipelines let you manage extract, transform, and load (ETL) activities together instead of individually. Monitor the success or failure of jobs and pipeline activities, and look for errors if a pipeline failed. View job history to compare current performance against past runs and identify when errors were first introduced into a process.
    • Dataflows – A dataflow is a tool for ingesting, loading, and transforming data using a low-code interface. Dataflows can be run manually, on a schedule, or as part of pipeline orchestration. Monitor start and end times, status, duration, and table load activities. To investigate issues, drill down into activities and view information about errors.
    • Semantic model refreshes – A semantic model is a visual representation of a data model that’s ready for reporting and visualization. It contains transformations, calculations, and data relationships. Changes to the underlying data require the semantic model to be refreshed, which can be done from data pipelines using the semantic model refresh activity. Monitor for refresh retries to help identify transient issues before classifying an issue as a failure.
    • Spark jobs, notebooks, and lakehouses – Notebooks are an interface for developing Apache Spark jobs. Data can be loaded into or transformed for lakehouses using Spark and notebooks. Monitor Spark job progress, task execution, and resource usage, and review Spark logs.
    • Microsoft Fabric Eventstreams – Events are observations about the state of an object, such as a timestamped reading from a weather sensor. Eventstreams in Fabric run perpetually to ingest real-time or streaming events into Fabric, transform them for analytics needs, and route them to various destinations. Monitor streaming event data, ingestion status, and ingestion performance.

  • Describe Software as a Service

    Software as a service (SaaS) is the most complete cloud service model from a product perspective. With SaaS, you’re essentially renting or using a fully developed application. Email, financial software, messaging applications, and connectivity software are all common examples of a SaaS implementation.

    While the SaaS model may be the least flexible, it’s also the easiest to get up and running. It requires the least amount of technical knowledge or expertise to fully employ.

    Shared responsibility model

    The shared responsibility model applies to all the cloud service types. SaaS is the model that places the most responsibility with the cloud provider and the least responsibility with the user. In a SaaS environment you’re responsible for the data that you put into the system, the devices that you allow to connect to the system, and the users that have access. Nearly everything else falls to the cloud provider. The cloud provider is responsible for physical security of the datacenters, power, network connectivity, and application development and patching.

  • Describe Platform as a Service

    Platform as a service (PaaS) is a middle ground between renting space in a datacenter (infrastructure as a service) and paying for a complete and deployed solution (software as a service). In a PaaS environment, the cloud provider maintains the physical infrastructure, physical security, and connection to the internet. They also maintain the operating systems, middleware, development tools, and business intelligence services that make up a cloud solution. In a PaaS scenario, you don’t have to worry about the licensing or patching for operating systems and databases.

    PaaS is well suited to provide a complete development environment without the headache of maintaining all the development infrastructure.

    Shared responsibility model

    The shared responsibility model applies to all the cloud service types. PaaS splits the responsibility between you and the cloud provider. The cloud provider is responsible for maintaining the physical infrastructure and its access to the internet, just like in IaaS. In the PaaS model, the cloud provider will also maintain the operating systems, databases, and development tools. Think of PaaS like using a domain-joined machine: IT maintains the device with regular updates, patches, and refreshes.

    Depending on the configuration, you or the cloud provider may be responsible for networking settings and connectivity within your cloud environment, network and application security, and the directory infrastructure.

  • Describe Infrastructure as a Service

    Infrastructure as a service (IaaS) is the most flexible category of cloud services, as it provides you the maximum amount of control for your cloud resources. In an IaaS model, the cloud provider is responsible for maintaining the hardware, network connectivity (to the internet), and physical security. You’re responsible for everything else: operating system installation, configuration, and maintenance; network configuration; database and storage configuration; and so on. With IaaS, you’re essentially renting the hardware in a cloud datacenter, but what you do with that hardware is up to you.

    Shared responsibility model

    The shared responsibility model applies to all the cloud service types. IaaS places the largest share of responsibility with you. The cloud provider is responsible for maintaining the physical infrastructure and its access to the internet. You’re responsible for installation and configuration, patching and updates, and security.

  • ASP.NET Core Web API Controllers

    In the previous exercise, you created a web application that provides sample weather forecast data, then interacted with it in the HTTP Read-Eval-Print Loop (REPL).

    Before you dive into writing your own PizzaController class, let’s look at the code in the WeatherForecastController sample to understand how it works. In this unit, you learn how WeatherForecastController uses the ControllerBase base class and a few .NET attributes to build a functional web API in a few dozen lines of code. After you understand those concepts, you’re ready to write your own PizzaController class.

    The base class: ControllerBase

    A controller is a public class with one or more public methods known as actions. By convention, a controller is placed in the project root’s Controllers directory. The actions are exposed as HTTP endpoints via routing. So an HTTP GET request to https://localhost:{PORT}/weatherforecast causes the Get() method of the WeatherForecastController class to be executed.
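
    That class-name-to-URL mapping follows ASP.NET Core’s [controller] route token, which drops the Controller suffix from the class name; route matching itself is case-insensitive. A rough shell sketch of the convention, using the class names from this module (the lowercasing is just for display):

```shell
# Derive the route segment the [controller] token would produce:
# strip the "Controller" suffix and lowercase what remains.
controller_route() {
  name=${1%Controller}                          # drop the suffix
  printf '%s\n' "$name" | tr '[:upper:]' '[:lower:]'
}

controller_route WeatherForecastController   # prints: weatherforecast
controller_route PizzaController             # prints: pizza
```

    This is why a GET request to /weatherforecast reaches WeatherForecastController without the route ever being spelled out as a string.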

    The first thing to notice is that this class inherits from the ControllerBase base class. This base class provides much standard functionality for handling HTTP requests, so you can focus on the specific business logic for your application.

  • REST in ASP.NET Core

    When you browse to a webpage, the web server communicates with your browser by using HTML, CSS, and JavaScript. For example, if you interact with the page by submitting a sign-in form or selecting a buy button, the browser sends the information back to the web server.

    In a similar way, web servers can communicate with a broad range of clients (browsers, mobile devices, other web servers, and more) by using web services. API clients communicate with the server over HTTP, and the two exchange information by using a data format such as JSON or XML. APIs are often used in single-page applications (SPAs) that perform most of the user-interface logic in a web browser. Communication with the web server primarily happens through web APIs.

    REST: A common pattern for building APIs with HTTP

    Representational State Transfer (REST) is an architectural style for building web services. REST requests are made over HTTP. They use the same HTTP verbs that web browsers use to retrieve webpages and send data to servers. The verbs are:

    • GET: Retrieve data from the web service.
    • POST: Create a new item of data on the web service.
    • PUT: Update an item of data on the web service.
    • PATCH: Update an item of data on the web service by describing a set of instructions about how the item should be modified. The sample application in this module doesn’t use this verb.
    • DELETE: Delete an item of data on the web service.

    Web service APIs that adhere to REST are called RESTful APIs. They’re defined through:

    • A base URI.
    • HTTP methods, such as GET, POST, PUT, PATCH, or DELETE.
    • A media type for the data, such as JavaScript Object Notation (JSON) or XML.

    An API often needs to provide services for a few different but related things. For example, our pizza API might manage pizzas, customers, and orders. We use routing to map URIs (uniform resource identifiers) to logical divisions in our code, so that requests to https://localhost:5000/pizza are routed to PizzaController and requests to https://localhost:5000/order are routed to OrderController.
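
    As a rough illustration of that routing, here is a shell sketch mapping a verb and path to a controller action. The controllers and actions are the hypothetical ones from the pizza example, not real framework code.

```shell
# Map an HTTP verb and URI path to a hypothetical controller action.
route() {
  verb=$1; path=$2
  case "$path" in
    /pizza*) controller=PizzaController ;;
    /order*) controller=OrderController ;;
    *)       echo "404 Not Found"; return ;;
  esac
  case "$verb" in
    GET)    action=Get ;;
    POST)   action=Post ;;
    PUT)    action=Put ;;
    DELETE) action=Delete ;;
  esac
  echo "${controller}.${action}()"
}

route GET /pizza    # prints: PizzaController.Get()
route POST /order   # prints: OrderController.Post()
```

    In ASP.NET Core this dispatch is handled by the routing middleware and controller attributes rather than written by hand, but the verb-plus-URI lookup works the same way.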

  • AI as an ally for humanitarian action

    As our world undergoes rapid transformations and faces numerous complex challenges, the need for effective humanitarian action has never been more urgent. AI is at the forefront, offering solutions to aid people in need.

    Recent AI advancements can help tackle some of the most pressing global challenges. As AI progresses, so does our capacity to address significant climate and humanitarian challenges.
