diff --git a/.amlignore b/.amlignore new file mode 100644 index 0000000..0fa594b --- /dev/null +++ b/.amlignore @@ -0,0 +1,6 @@ +## This file was auto generated by the Azure Machine Learning Studio. Please do not remove. +## Read more about the .amlignore file here: https://docs.microsoft.com/azure/machine-learning/how-to-save-write-experiment-files#storage-limits-of-experiment-snapshots + +.ipynb_aml_checkpoints/ +*.amltmp +*.amltemp \ No newline at end of file diff --git a/.amlignore.amltmp b/.amlignore.amltmp new file mode 100644 index 0000000..0fa594b --- /dev/null +++ b/.amlignore.amltmp @@ -0,0 +1,6 @@ +## This file was auto generated by the Azure Machine Learning Studio. Please do not remove. +## Read more about the .amlignore file here: https://docs.microsoft.com/azure/machine-learning/how-to-save-write-experiment-files#storage-limits-of-experiment-snapshots + +.ipynb_aml_checkpoints/ +*.amltmp +*.amltemp \ No newline at end of file diff --git a/1-Training/AzureServiceClassifier_Training.ipynb b/1-Training/AzureServiceClassifier_Training.ipynb index f68b756..54cd15c 100644 --- a/1-Training/AzureServiceClassifier_Training.ipynb +++ b/1-Training/AzureServiceClassifier_Training.ipynb @@ -51,7 +51,7 @@ "source": [ "### Check Azure Machine Learning Python SDK version\n", "\n", - "This tutorial requires version 1.0.69 or higher. Let's check the version of the SDK:" + "This tutorial requires version 1.27.0 or higher. 
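Version strings like these must be compared numerically, field by field, not as plain strings. A minimal plain-Python sketch of that check (illustration only, no Azure ML dependency; real SDK versions may carry suffixes this simple parser does not handle):

```python
def meets_minimum(version: str, minimum: str = "1.27.0") -> bool:
    """Compare dotted version strings numerically, e.g. '1.30.0' >= '1.27.0'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

print(meets_minimum("1.30.0"))  # True
print(meets_minimum("1.9.0"))   # False, although a naive string comparison would say otherwise
```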
Let's check the version of the SDK:" ] }, { @@ -197,10 +197,10 @@ "source": [ "from azureml.core import Workspace\n", "\n", "ws = Workspace.from_config()\n", - "print('Workspace name: ' + workspace.name, \n", - " 'Azure region: ' + workspace.location, \n", - " 'Subscription id: ' + workspace.subscription_id, \n", - " 'Resource group: ' + workspace.resource_group, sep = '\\n')" + "print('Workspace name: ' + ws.name, \n", + " 'Azure region: ' + ws.location, \n", + " 'Subscription id: ' + ws.subscription_id, \n", + " 'Resource group: ' + ws.resource_group, sep = '\\n')" ] }, { @@ -216,23 +216,30 @@ "source": [ "A [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) is used to store connection information to a central data storage. This allows you to access your storage without having to hard code this (potentially confidential) information into your scripts. \n", "\n", - "In this tutorial, the data was been previously prepped and uploaded into a central [Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. We will register this container into our workspace as a datastore using a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview). " + "In this tutorial, the data has been previously prepped and uploaded into a central [Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. We will register this container into our workspace as a datastore using a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview). 
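A SAS token is just a URL query string whose fields spell out what access it grants (`sp` = permissions, `se` = expiry, `sr` = resource type, `sig` = signature). A standard-library sketch of pulling those fields apart; the token below is a made-up placeholder, not a working credential:

```python
from urllib.parse import parse_qs

# Illustrative only -- this is not a valid credential.
sas_token = "?sv=2020-04-08&st=2021-05-26T04%3A39%3A46Z&se=2022-05-27T04%3A39%3A00Z&sr=c&sp=rl&sig=FAKESIG"

fields = {key: values[0] for key, values in parse_qs(sas_token.lstrip("?")).items()}
print(fields["sp"])  # 'rl' -> read and list only, so the datastore cannot be written to
print(fields["se"])  # expiry timestamp: access stops working after this moment
```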
\n", + "\n", + "\n", + "Here are the files in the datastore:\n", + "![](https://github.com/xlegend1024/bert-stack-overflow/raw/master/images/datastore_folder_files.png)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { + "gather": { + "logged": 1622006615321 + }, "scrolled": true }, "outputs": [], "source": [ "from azureml.core import Datastore, Dataset\n", "\n", - "datastore_name = 'tfworld'\n", + "datastore_name = 'mtcseattle'\n", "container_name = 'azure-service-classifier'\n", - "account_name = 'johndatasets'\n", - "sas_token = '?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2021-06-02T03:40:25Z&st=2020-03-09T19:40:25Z&spr=https&sig=bUwK7AJUj2c%2Fr90Qf8O1sojF0w6wRFgL2c9zMVCWNPA%3D'\n", + "account_name = 'mtcseattle'\n", + "sas_token = '?sv=2020-04-08&st=2021-05-26T04%3A39%3A46Z&se=2022-05-27T04%3A39%3A00Z&sr=c&sp=rl&sig=CTFMEu24bo2X06G%2B%2F2aKiiPZBzvlWHELe15rNFqULUk%3D'\n", "\n", "datastore = Datastore.register_azure_blob_container(workspace=ws, \n", " datastore_name=datastore_name, \n", @@ -253,11 +260,16 @@ "cell_type": "code", "execution_count": null, "metadata": { + "gather": { + "logged": 1622006619321 + }, "scrolled": true }, "outputs": [], "source": [ - "datastore = ws.datastores['tfworld']\n", + "from azureml.core import Datastore\n", + "\n", + "datastore = Datastore.get(ws, 'mtcseattle')\n", "datastore" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "#### What if my data wasn't already hosted remotely?\n", - "All workspaces also come with a blob container which is registered as a default datastore. This allows you to easily upload your own data to a remote storage location. 
You can access this datastore and upload files as follows:\n", - "```\n", - "datastore = workspace.get_default_datastore()\n", - "ds.upload(src_dir='', target_path='')\n", - "```\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Register Dataset\n", + "## Dataset\n", "\n", "Azure Machine Learning service supports first class notion of a Dataset. A [Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py) is a resource for exploring, transforming and managing data in Azure Machine Learning. The following Dataset types are supported:\n", "\n", "* [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format created by parsing the provided file or list of files.\n", "\n", - "* [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in datastores or from public URLs.\n", - "\n", - "First, we will use visual tools in Azure ML studio to register and explore our dataset as Tabular Dataset.\n", - "\n", - "* **ACTION**: Follow [create-dataset](images/create-dataset.ipynb) guide to create Tabular Dataset from our training data." + "* [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in datastores or from public URLs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Register Dataset using SDK\n", + "### Register Dataset using SDK\n", "\n", - "In addition to UI we can register datasets using SDK. In this workshop we will register second type of Datasets using code - File Dataset. File Dataset allows specific folder in our datastore that contains our data files to be registered as a Dataset.\n", + "We will register datasets using SDK. 
In this workshop we will register second type of Datasets using code - File Dataset. File Dataset allows specific folder in our datastore that contains our data files to be registered as a Dataset.\n", "\n", "There is a folder within our datastore called **data** that contains all our training and testing data. We will register this as a dataset." ] @@ -340,94 +336,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Explore Training Code" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In this workshop the training code is provided in [train.py](./train.py) and [model.py](./model.py) files. The model is based on popular [huggingface/transformers](https://github.com/huggingface/transformers) libary. Transformers library provides performant implementation of BERT model with high level and easy to use APIs based on Tensorflow 2.0.\n", - "\n", - "![](https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png)\n", - "\n", - "* **ACTION**: Explore _train.py_ and _model.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)\n", - "* NOTE: You can also explore the files using Jupyter or Jupyter Lab UI." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Test Locally\n", - "\n", - "Let's try running the script locally to make sure it works before scaling up to use our compute cluster. To do so, you will need to install the transformers libary." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%pip install transformers==2.0.0" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We have taken a small partition of the dataset and included it in this repository. Let's take a quick look at the format of the data." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data_dir = './data'" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os \n", - "import pandas as pd\n", - "data = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None)\n", - "data.head(5)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now we know what the data looks like, let's test out our script!\n", + "## Perform Experiment Using the Azure Machine Learning SDK\n", - "\n", - "Note: --steps_per_epoch 5 --num_epochs 1" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%time\n", - "import sys\n", - "!{sys.executable} train.py --data_dir {data_dir} --max_seq_length 128 --batch_size 16 --learning_rate 3e-5 --steps_per_epoch 5 --num_epochs 1 --export_dir ../outputs/model" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Perform Experiment\n", - "\n", - "Now that we have our compute target, dataset, and training script working locally, it is time to scale up so that the script can run faster. We will start by creating an [experiment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py). An experiment is a grouping of many runs from a specified script. All runs in this tutorial will be performed under the same experiment. " + "Now that we have a Compute Instance, a dataset, and a training script working locally, it is time to train a model by running the script. We will start by creating an [experiment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py). An experiment is a grouping of many runs from a specified script. All runs in this tutorial will be performed under the same experiment. 
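The experiment/run relationship described above is simply a named grouping: many runs, one experiment. A toy plain-Python model of that bookkeeping (not the Azure ML API):

```python
from collections import defaultdict

# One experiment name groups many submitted runs.
experiments = defaultdict(list)

def submit_run(experiment_name: str, run_id: str) -> str:
    """Record a run under its experiment, mirroring how submissions are grouped."""
    experiments[experiment_name].append(run_id)
    return run_id

submit_run("azure-service-classifier", "run_01")
submit_run("azure-service-classifier", "run_02")
print(experiments["azure-service-classifier"])  # ['run_01', 'run_02']
```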
" ] }, { @@ -441,31 +352,24 @@ "from azureml.core import Experiment\n", "\n", "experiment_name = 'azure-service-classifier' \n", - "experiment = Experiment(workspace, name=experiment_name)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Create TensorFlow Estimator\n", - "\n", - "The Azure Machine Learning Python SDK Estimator classes allow you to easily construct run configurations for your experiments. They allow you too define parameters such as the training script to run, the compute target to run it on, framework versions, additional package requirements, etc. \n", - "\n", - "You can also use a generic [Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py) to submit training scripts that use any learning framework you choose.\n", - "\n", - "For popular libaries like PyTorch and Tensorflow you can use their framework specific estimators. We will use the [TensorFlow Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) for our experiment." 
+ "experiment = Experiment(ws, name=experiment_name)" ] }, { "cell_type": "code", "execution_count": null, - "metadata": {}, + "metadata": { + "gather": { + "logged": 1622006443581 + } + }, "outputs": [], "source": [ "from azureml.core import Environment\n", + "from azureml.core.conda_dependencies import CondaDependencies \n", "\n", - "env = Environment.get(workspace, name='AzureML-TensorFlow-2.0-GPU')\n", + "env = Environment.get(ws, name='AzureML-TensorFlow-2.0-GPU')\n", + "env.python.conda_dependencies.add_conda_package(\"pip\")\n", "env.python.conda_dependencies.add_pip_package(\"transformers==2.0.0\")\n", "env.python.conda_dependencies.add_pip_package(\"absl-py\")\n", "env.python.conda_dependencies.add_pip_package(\"azureml-dataprep\")\n", @@ -476,6 +380,17 @@ "env" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create ScriptRunConfig\n", + "\n", + "The RunConfiguration object encapsulates the information necessary to submit a training run in an experiment. Typically, you will not create a RunConfiguration object directly but get one from a method that returns it, such as the submit method of the Experiment class.\n", + "\n", + "RunConfiguration is a base environment configuration that is also used in other types of configuration steps that depend on what kind of run you are triggering. For example, when setting up a PythonScriptStep, you can access the step's RunConfiguration object and configure Conda dependencies or access the environment properties for the run." + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -498,19 +413,14 @@ "\n", " 1) *azure_dataset.as_named_input('azureservicedata').as_mount()* mounts the dataset to the remote compute and provides the path to the dataset on our datastore. \n", " \n", - " 2) All outputs from the training script must be outputted to an './outputs' directory as this is the only directory that will be saved to the run. 
\n", - " \n", - " \n", - "- `framework_version`: This specifies the version of TensorFlow to use. Use Tensorflow.get_supported_verions() to see all supported versions.\n", - "- `use_gpu`: This will use the GPU on the compute target for training if set to True.\n", - "- `pip_packages`: This allows you to define any additional libraries to install before training." + " 2) All outputs from the training script must be outputted to an './outputs' directory as this is the only directory that will be saved to the run. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "#### 1) Add Metrics Logging\n", + "#### Add Metrics Logging\n", "\n", "So we were able to clone a Tensorflow 2.0 project and run it without any changes. However, with larger scale projects we would want to log some metrics in order to make it easier to monitor the performance of our model. \n", "\n", @@ -527,24 +437,26 @@ "run.log('val_accuracy', float(logs.get('val_accuracy')))\n", "run.log('accuracy', float(logs.get('accuracy')))\n", "```\n", - "We've created a *train_logging.py* script that includes logging metrics as shown above. \n", - "\n", - "* **ACTION**: Explore _train_logging.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)" + "We've created a *train_logging.py* script that includes logging metrics as shown above. " ] }, { - "cell_type": "markdown", + "cell_type": "code", + "execution_count": null, "metadata": {}, + "outputs": [], "source": [ - "We can submit this run in the same way that we did before. 
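The `run.log` calls above live inside a Keras callback in *train_logging.py*. The shape of that hook can be sketched without TensorFlow by standing a tiny mock in for the run object (illustration only; `MockRun` is not the real `azureml.core.Run`):

```python
class MockRun:
    """Stand-in for the Azure ML run handle: just collects whatever gets logged."""
    def __init__(self):
        self.metrics = {}

    def log(self, name, value):
        self.metrics.setdefault(name, []).append(value)

class LogRunMetrics:
    """Mirrors a Keras Callback's on_epoch_end hook, forwarding epoch metrics to the run."""
    def __init__(self, run):
        self.run = run

    def on_epoch_end(self, epoch, logs=None):
        for name in ("val_loss", "val_accuracy", "accuracy"):
            if logs and name in logs:
                self.run.log(name, float(logs[name]))

run = MockRun()
callback = LogRunMetrics(run)
callback.on_epoch_end(0, {"val_loss": 0.42, "val_accuracy": 0.88, "accuracy": 0.91})
print(run.metrics["val_accuracy"])  # [0.88]
```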
\n", - "\n", - "*Since our cluster can scale automatically to two nodes, we can run this job simultaneously with the previous one.*" + "%pycat train_logging.py" ] }, { "cell_type": "code", "execution_count": null, - "metadata": {}, + "metadata": { + "gather": { + "logged": 1622006507878 + } + }, "outputs": [], "source": [ "from azureml.core import ScriptRun, ScriptRunConfig\n", @@ -553,10 +465,10 @@ " script='train_logging.py',\n", " arguments=['--data_dir', azure_dataset.as_named_input('azureservicedata').as_mount(),\n", " '--max_seq_length', 128,\n", - " '--batch_size', 32,\n", + " '--batch_size', 16,\n", " '--learning_rate', 3e-5,\n", " '--steps_per_epoch', 5, # to reduce time for workshop\n", - " '--num_epochs', 2, # to reduce time for workshop\n", + " '--num_epochs', 1, # to reduce time for workshop\n", " '--export_dir','./outputs/model'],\n", " compute_target=locals,\n", " environment=env)\n", @@ -575,6 +487,9 @@ "cell_type": "code", "execution_count": null, "metadata": { + "gather": { + "logged": 1622006698331 + }, "scrolled": true }, "outputs": [], @@ -612,7 +527,7 @@ "run.download_files(prefix='outputs/model')\n", "\n", "# If you haven't finished training the model then just download pre-made model from datastore\n", - "datastore.download('./',prefix=\"model\")" + "# datastore.download('./',prefix=\"model\")" ] }, { @@ -633,6 +548,7 @@ "from model import TFBertForMultiClassification\n", "from transformers import BertTokenizer\n", "import tensorflow as tf\n", + "\n", "def encode_example(text, max_seq_length):\n", " # Encode inputs using tokenizer\n", " inputs = tokenizer.encode_plus(\n", @@ -652,6 +568,7 @@ " return input_ids, attention_mask, token_type_ids\n", " \n", "labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", + "\n", "# Load model and tokenizer\n", "loaded_model = TFBertForMultiClassification.from_pretrained('model', num_labels=len(labels))\n", "tokenizer = 
BertTokenizer.from_pretrained('bert-base-cased')\n", @@ -721,6 +638,44 @@ "predict(\"How can virtual machine trigger devops pipeline\")" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Register Model\n", + "\n", + "A registered [model](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model(class)?view=azure-ml-py) is a reference to the directory or file that make up your model. After registering a model, you and other people in your workspace can easily gain access to and deploy your model without having to run the training script again. \n", + "\n", + "We need to define the following parameters to register a model:\n", + "\n", + "- `model_name`: The name for your model. If the model name already exists in the workspace, it will create a new version for the model.\n", + "- `model_path`: The path to where the model is stored. In our case, this was the *export_dir* defined in our estimators.\n", + "- `description`: A description for the model.\n", + "\n", + "Let's register the best run from our hyperparameter tuning." 
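Picking a best run by its primary metric boils down to a max over per-run metrics. A plain-Python sketch of the idea (not the HyperDrive API; the run ids and values are invented for illustration):

```python
runs = [
    {"id": "run_1", "metrics": {"val_accuracy": 0.83}},
    {"id": "run_2", "metrics": {"val_accuracy": 0.91}},
    {"id": "run_3", "metrics": {"val_accuracy": 0.87}},
]

def best_run(candidates, metric="val_accuracy"):
    """Highest value of the chosen primary metric wins."""
    return max(candidates, key=lambda candidate: candidate["metrics"][metric])

print(best_run(runs)["id"])  # run_2
```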
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "model = run.register_model(model_name='azure-service-classifier', \n", + " model_path='./outputs/model',\n", + " datasets=[('train, test, validation data', azure_dataset)],\n", + " description='BERT model for classifying azure services on stackoverflow posts.')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---" + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -747,6 +702,33 @@ "* **ACTION**: Explore _train_horovod.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)" ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pycat train_horovod.py" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import AmlCompute, ComputeTarget\n", + "\n", + "cluster_name = 'train-gpu-nv6'\n", + "compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NV6', \n", + " idle_seconds_before_scaledown=6000,\n", + " min_nodes=0, \n", + " max_nodes=1)\n", + "\n", + "compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -773,7 +755,7 @@ " '--num_epochs', 3,\n", " '--export_dir','./outputs/model'],\n", " compute_target=compute_target,\n", - " distributed_job_config=MpiConfiguration(node_count=2),\n", + " distributed_job_config=MpiConfiguration(process_count_per_node=1, node_count=1),\n", " environment=env)\n", "\n", "run3 = experiment.submit(scriptrun3)" @@ -959,37 +941,6 @@ "best_run = run4.get_best_run_by_primary_metric()" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Register Model\n", - "\n", - "A registered 
[model](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model(class)?view=azure-ml-py) is a reference to the directory or file that make up your model. After registering a model, you and other people in your workspace can easily gain access to and deploy your model without having to run the training script again. \n", - "\n", - "We need to define the following parameters to register a model:\n", - "\n", - "- `model_name`: The name for your model. If the model name already exists in the workspace, it will create a new version for the model.\n", - "- `model_path`: The path to where the model is stored. In our case, this was the *export_dir* defined in our estimators.\n", - "- `description`: A description for the model.\n", - "\n", - "Let's register the best run from our hyperparameter tuning." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "scrolled": true - }, - "outputs": [], - "source": [ - "model = best_run.register_model(model_name='azure-service-classifier', \n", - " model_path='./outputs/model',\n", - " datasets=[('train, test, validation data', azure_dataset)],\n", - " description='BERT model for classifying azure services on stackoverflow posts.')" - ] - }, { "cell_type": "markdown", "metadata": {}, diff --git a/1-Training/azureserviceclassifier_training.ipynb.amltmp b/1-Training/azureserviceclassifier_training.ipynb.amltmp deleted file mode 100644 index b5c08f7..0000000 --- a/1-Training/azureserviceclassifier_training.ipynb.amltmp +++ /dev/null @@ -1,1206 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." 
- ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "# Part 1: Training Tensorflow 2.0 Model on Azure Machine Learning Service\n", - "\n", - "## Overview of the part 1\n", - "This notebook is Part 1 (Preparing Data and Model Training) of a four part workshop that demonstrates an end-to-end workflow using Tensorflow 2.0 on Azure Machine Learning service. The different components of the workshop are as follows:\n", - "\n", - "- Part 1: [Preparing Data and Model Training](https://github.com/microsoft/bert-stack-overflow/blob/master/1-Training/AzureServiceClassifier_Training.ipynb)\n", - "- Part 2: [Inferencing and Deploying a Model](https://github.com/microsoft/bert-stack-overflow/blob/master/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb)\n", - "- Part 3: [Setting Up a Pipeline Using MLOps](https://github.com/microsoft/bert-stack-overflow/tree/master/3-ML-Ops)\n", - "- Part 4: [Explaining Your Model Interpretability](https://github.com/microsoft/bert-stack-overflow/blob/master/4-Interpretibility/IBMEmployeeAttritionClassifier_Interpretability.ipynb)\n", - "\n", - "**This notebook will cover the following topics:**\n", - "\n", - "- Stackoverflow question tagging problem\n", - "- Introduction to Transformer and BERT deep learning models\n", - "- Introduction to Azure Machine Learning service\n", - "- Preparing raw data for training using Apache Spark\n", - "- Registering cleaned up training data as a Dataset\n", - "- Debugging the model in Tensorflow 2.0 Eager Mode\n", - "- Training the model on GPU cluster\n", - "- Monitoring training progress with built-in Tensorboard dashboard \n", - "- Automated search of best hyper-parameters of the model\n", - "- Registering the trained model for future deployment" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Prerequisites\n", - "This notebook is designed to be run in Azure ML Notebook VM. 
See [readme](https://github.com/microsoft/bert-stack-overflow/blob/master/README.md) file for instructions on how to create Notebook VM and open this notebook in it." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Check Azure Machine Learning Python SDK version\n", - "\n", - "This tutorial requires version 1.0.69 or higher. Let's check the version of the SDK:" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "import azureml.core\n", - "\n", - "print(\"Azure Machine Learning Python SDK version:\", azureml.core.VERSION)" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "## Stackoverflow Question Tagging Problem \n", - "In this workshop we will use powerful language understanding model to automatically route Stackoverflow questions to the appropriate support team on the example of Azure services.\n", - "\n", - "One of the key tasks to ensuring long term success of any Azure service is actively responding to related posts in online forums such as Stackoverflow. In order to keep track of these posts, Microsoft relies on the associated tags to direct questions to the appropriate support team. While Stackoverflow has different tags for each Azure service (azure-web-app-service, azure-virtual-machine-service, etc), people often use the generic **azure** tag. This makes it hard for specific teams to track down issues related to their product and as a result, many questions get left unanswered. \n", - "\n", - "**In order to solve this problem, we will build a model to classify posts on Stackoverflow with the appropriate Azure service tag.**\n", - "\n", - "We will be using a BERT (Bidirectional Encoder Representations from Transformers) model which was published by researchers at Google AI Reasearch. 
Unlike prior language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of natural language processing (NLP) tasks without substantial architecture modifications.\n", - "\n", - "## Why use BERT model?\n", - "[Introduction of BERT model](https://arxiv.org/pdf/1810.04805.pdf) changed the world of NLP. Many NLP problems that before relied on specialized models to achive state of the art performance are now solved with BERT better and with more generic approach.\n", - "\n", - "If we look at the leaderboards on such popular NLP problems as GLUE and SQUAD, most of the top models are based on BERT:\n", - "* [GLUE Benchmark Leaderboard](https://gluebenchmark.com/leaderboard/)\n", - "* [SQuAD Benchmark Leaderboard](https://rajpurkar.github.io/SQuAD-explorer/)\n", - "\n", - "Recently, Allen Institue for AI announced new language understanding system called Aristo [https://allenai.org/aristo/](https://allenai.org/aristo/). The system has been developed for 20 years, but it's performance was stuck at 60% on 8th grade science test. The result jumped to 90% once researchers adopted BERT as core language understanding component. With BERT Aristo now solves the test with A grade. " - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Quick Overview of How BERT model works\n", - "\n", - "The foundation of BERT model is Transformer model, which was introduced in [Attention Is All You Need paper](https://arxiv.org/abs/1706.03762). Before that event the dominant way of processing language was Recurrent Neural Networks (RNNs). 
Let's start our overview with RNNs.\n", - "\n", - "## RNNs\n", - "\n", - "RNNs were powerful way of processing language due to their ability to memorize its previous state and perform sophisticated inference based on that.\n", - "\n", - "\"Drawing\"\n", - "\n", - "_Taken from [1](https://towardsdatascience.com/transformers-141e32e69591)_\n", - "\n", - "Applied to language translation task, the processing dynamics looked like this.\n", - "\n", - "![](https://miro.medium.com/max/1200/1*8GcdjBU5TAP36itWBcZ6iA.gif)\n", - "_Taken from [2](https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/)_\n", - " \n", - "But RNNs suffered from 2 disadvantes:\n", - "1. Sequential computation put a limit on parallelization, which limited effectiveness of larger models.\n", - "2. Long term relationships between words were harder to detect." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Transformers\n", - "\n", - "Transformers were designed to address these two limitations of RNNs.\n", - "\n", - "\"Drawing\"\n", - "\n", - "_Taken from [3](http://jalammar.github.io/illustrated-transformer/)_\n", - "\n", - "In each Encoder layer Transformer performs Self-Attention operation which detects relationships between all word embeddings in one matrix multiplication operation. \n", - "\n", - "\"Drawing\"\n", - "\n", - "_Taken from [4](https://towardsdatascience.com/deconstructing-bert-part-2-visualizing-the-inner-workings-of-attention-60a16d86b5c1)_\n" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## BERT Model\n", - "\n", - "BERT is a very large network with multiple layers of Transformers (12 for BERT-base, and 24 for BERT-large). The model is first pre-trained on large corpus of text data (WikiPedia + books) using un-superwised training (predicting masked words in a sentence). 
During pre-training the model absorbs significant level of language understanding.\n", - "\n", - "\"Drawing\"\n", - "\n", - "_Taken from [5](http://jalammar.github.io/illustrated-bert/)_\n", - "\n", - "Pre-trained network then can easily be fine-tuned to solve specific language task, like answering questions, or categorizing spam emails.\n", - "\n", - "\"Drawing\"\n", - "\n", - "_Taken from [5](http://jalammar.github.io/illustrated-bert/)_\n", - "\n", - "The end-to-end training process of the stackoverflow question tagging model looks like this:\n", - "\n", - "![](images/model-training-e2e.png)\n" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## What is Azure Machine Learning Service?\n", - "Azure Machine Learning service is a cloud service that you can use to develop and deploy machine learning models. Using Azure Machine Learning service, you can track your models as you build, train, deploy, and manage them, all at the broad scale that the cloud provides.\n", - "![](./images/aml-overview.png)\n", - "\n", - "\n", - "#### How can we use it for training machine learning models?\n", - "Training machine learning models, particularly deep neural networks, is often a time- and compute-intensive task. Once you've finished writing your training script and running on a small subset of data on your local machine, you will likely want to scale up your workload.\n", - "\n", - "To facilitate training, the Azure Machine Learning Python SDK provides a high-level abstraction, the estimator class, which allows users to easily train their models in the Azure ecosystem. You can create and use an Estimator object to submit any training code you want to run on remote compute, whether it's a single-node run or distributed training across a GPU cluster." 
- ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Connect To Workspace\n", - "\n", - "The [workspace](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace(class)?view=azure-ml-py) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace holds all your experiments, compute targets, models, datastores, etc.\n", - "\n", - "You can [open ml.azure.com](https://ml.azure.com) to access your workspace resources through a graphical user interface of **Azure Machine Learning studio**.\n", - "\n", - "![](./images/aml-workspace.png)\n", - "\n", - "**You will be asked to login in the next step. Use your Microsoft AAD credentials.**" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import Workspace\n", - "\n", - "workspace = Workspace.from_config()\n", - "print('Workspace name: ' + workspace.name, \n", - " 'Azure region: ' + workspace.location, \n", - " 'Subscription id: ' + workspace.subscription_id, \n", - " 'Resource group: ' + workspace.resource_group, sep = '\\n')" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Create Compute Target\n", - "\n", - "A [compute target](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.computetarget?view=azure-ml-py) is a designated compute resource/environment where you run your training script or host your service deployment. This location may be your local machine or a cloud-based compute resource. Compute targets can be reused across the workspace for different runs and experiments. 
\n", - "\n", - "For this tutorial, we will create an auto-scaling [Azure Machine Learning Compute](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.amlcompute?view=azure-ml-py) cluster, which is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. To create the cluster, we need to specify the following parameters:\n", - "\n", - "- `vm_size`: This is the type of GPU machine that we want to use in our cluster. For this tutorial, we will use **Standard_NC12s_v3 (NVIDIA V100) GPU machines**.\n", - "- `idle_seconds_before_scaledown`: This is the number of seconds before a node will scale down in our auto-scaling cluster. We will set this to **6000** seconds. \n", - "- `min_nodes`: This is the minimum number of nodes that the cluster will have. To avoid paying for compute while nodes are not being used, we will set this to **0** nodes.\n", - "- `max_nodes`: This is the maximum number of nodes that the cluster will scale up to. We will set this to **2** nodes.\n", - "\n", - "**When jobs are submitted to the cluster it takes approximately 5 minutes to allocate new nodes** " - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core.compute import AmlCompute, ComputeTarget\n", - "\n", - "cluster_name = 'train-gpu-nv6'\n", - "compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC12s_v3', \n", - " idle_seconds_before_scaledown=6000,\n", - " min_nodes=0, \n", - " max_nodes=2)\n", - "\n", - "compute_target = ComputeTarget.create(workspace, cluster_name, compute_config)\n", - "compute_target.wait_for_completion(show_output=True)" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "To ensure our compute target was created successfully, we can check its status.
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "compute_target.get_status().serialize()" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "#### If the compute target has already been created, then you (and other users in your workspace) can directly run this cell." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "compute_target = workspace.compute_targets['train-gpu-nv6']" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "## Prepare Data Using Apache Spark\n", - "\n", - "To train our model, we used the Stackoverflow data dump from [Stack exchange archive](https://archive.org/download/stackexchange). Since the Stackoverflow _posts_ dataset is 12GB, we prepared the data using [Apache Spark](https://spark.apache.org/) framework on a scalable Spark compute cluster in [Azure Databricks](https://azure.microsoft.com/en-us/services/databricks/). \n", - "\n", - "For the purpose of this tutorial, we have processed the data ahead of time and uploaded it to an [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. The full data processing notebook can be found in the _spark_ folder.\n", - "\n", - "* **ACTION**: Open and explore [data preparation notebook](spark/stackoverflow-data-prep.ipynb).\n" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Register Datastore" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "A [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) is used to store connection information to a central data storage. This allows you to access your storage without having to hard code this (potentially confidential) information into your scripts. 
\n", - "\n", - "In this tutorial, the data was previously prepped and uploaded into a central [Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. We will register this container into our workspace as a datastore using a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview). " - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import Datastore, Dataset\n", - "\n", - "datastore_name = 'tfworld'\n", - "container_name = 'azure-service-classifier'\n", - "account_name = 'johndatasets'\n", - "sas_token = '?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2021-06-02T03:40:25Z&st=2020-03-09T19:40:25Z&spr=https&sig=bUwK7AJUj2c%2Fr90Qf8O1sojF0w6wRFgL2c9zMVCWNPA%3D'\n", - "\n", - "datastore = Datastore.register_azure_blob_container(workspace=workspace, \n", - " datastore_name=datastore_name, \n", - " container_name=container_name,\n", - " account_name=account_name, \n", - " sas_token=sas_token)" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "#### If the datastore has already been registered, then you (and other users in your workspace) can directly run this cell." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "datastore = workspace.datastores['tfworld']" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "#### What if my data wasn't already hosted remotely?\n", - "All workspaces also come with a blob container which is registered as a default datastore. This allows you to easily upload your own data to a remote storage location. 
You can access this datastore and upload files as follows:\n", - "```\n", - "datastore = workspace.get_default_datastore()\n", - "datastore.upload(src_dir='', target_path='')\n", - "```\n" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Register Dataset\n", - "\n", - "Azure Machine Learning service supports a first-class notion of a Dataset. A [Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py) is a resource for exploring, transforming and managing data in Azure Machine Learning. The following Dataset types are supported:\n", - "\n", - "* [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) represents data in a tabular format created by parsing the provided file or list of files.\n", - "\n", - "* [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in datastores or from public URLs.\n", - "\n", - "First, we will use visual tools in Azure ML studio to register and explore our dataset as a Tabular Dataset.\n", - "\n", - "* **ACTION**: Follow the [create-dataset](images/create-dataset.ipynb) guide to create a Tabular Dataset from our training data." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Register Dataset using SDK\n", - "\n", - "In addition to the UI, we can register datasets using the SDK. In this workshop we will register the second type of Dataset using code - a File Dataset. A File Dataset allows a specific folder in our datastore that contains our data files to be registered as a Dataset.\n", - "\n", - "There is a folder within our datastore called **data** that contains all our training and testing data. We will register this as a dataset."
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "azure_dataset = Dataset.File.from_files(path=(datastore, 'data'))\n", - "\n", - "azure_dataset = azure_dataset.register(workspace=workspace,\n", - " name='Azure Services Dataset',\n", - " description='Dataset containing azure related posts on Stackoverflow')" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "#### If the dataset has already been registered, then you (and other users in your workspace) can directly run this cell." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "azure_dataset = workspace.datasets['Azure Services Dataset']" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "## Explore Training Code" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "In this workshop the training code is provided in the [train.py](./train.py) and [model.py](./model.py) files. The model is based on the popular [huggingface/transformers](https://github.com/huggingface/transformers) library. The Transformers library provides a performant implementation of the BERT model with high-level, easy-to-use APIs based on TensorFlow 2.0.\n", - "\n", - "![](https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png)\n", - "\n", - "* **ACTION**: Explore _train.py_ and _model.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)\n", - "* NOTE: You can also explore the files using Jupyter or Jupyter Lab UI." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Test Locally\n", - "\n", - "Let's try running the script locally to make sure it works before scaling up to use our compute cluster. To do so, you will need to install the transformers library."
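Before running the script, it helps to see the command-line interface it exposes. The flag names below are taken from the invocation used later in this notebook; the real _train.py_ appears to parse them with absl-py rather than argparse, so treat this as an illustrative sketch of the same interface, not the actual implementation:

```python
import argparse

def parse_train_args(argv):
    # Hypothetical mirror of the flags train.py is invoked with in this
    # notebook; the real script appears to use absl-py FLAGS instead.
    parser = argparse.ArgumentParser(description='BERT fine-tuning flags (sketch)')
    parser.add_argument('--data_dir', type=str, required=True)
    parser.add_argument('--max_seq_length', type=int, default=128)
    parser.add_argument('--batch_size', type=int, default=32)
    parser.add_argument('--learning_rate', type=float, default=3e-5)
    parser.add_argument('--steps_per_epoch', type=int, default=150)
    parser.add_argument('--num_epochs', type=int, default=3)
    parser.add_argument('--export_dir', type=str, default='./outputs/model')
    return parser.parse_args(argv)

# Unspecified flags fall back to their defaults.
args = parse_train_args(['--data_dir', './data', '--num_epochs', '1'])
print(args.max_seq_length, args.num_epochs)  # 128 1
```

Every flag except `--data_dir` has a default, which is why the short local invocation below only needs to override a few of them.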
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%pip install transformers==2.0.0" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "We have taken a small partition of the dataset and included it in this repository. Let's take a quick look at the format of the data." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "data_dir = './data'" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "import os \n", - "import pandas as pd\n", - "data = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None)\n", - "data.head(5)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Now that we know what the data looks like, let's test out our script!" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%%time\n", - "import sys\n", - "!{sys.executable} train.py --data_dir {data_dir} --max_seq_length 128 --batch_size 16 --learning_rate 3e-5 --steps_per_epoch 5 --num_epochs 1 --export_dir ../outputs/model" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Debugging in TensorFlow 2.0 Eager Mode\n", - "\n", - "Eager mode is a new feature in TensorFlow 2.0 which makes understanding and debugging models easy. 
Let's start by configuring our remote debugging environment.\n", - "\n", - "#### Configure VS Code Remote connection to Notebook VM\n", - "\n", - "* **ACTION**: Install [Microsoft VS Code](https://code.visualstudio.com/) on your local machine.\n", - "\n", - "* **ACTION**: Follow this [configuration guide](https://github.com/danielsc/azureml-debug-training/blob/master/Setting%20up%20VSCode%20Remote%20on%20an%20AzureML%20Notebook%20VM.md) to setup VS Code Remote connection to Notebook VM.\n", - "\n", - "#### Debug training code using step-by-step debugger\n", - "\n", - "* **ACTION**: Open Remote VS Code session to your Notebook VM.\n", - "* **ACTION**: Open file `/home/azureuser/cloudfiles/code//bert-stack-overflow/1-Training/train_eager.py`.\n", - "* **ACTION**: Set a breakpoint in the file and start a Python debugging session. \n" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "On a CPU machine, training on the full dataset will take approximately 1.5 hours. Although it's a small dataset, it still takes a long time. Let's see how we can speed up the training by using the latest NVIDIA V100 GPUs in the Azure cloud. " - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Perform Experiment\n", - "\n", - "Now that we have our compute target, dataset, and training script working locally, it is time to scale up so that the script can run faster. We will start by creating an [experiment](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py). An experiment is a grouping of many runs from a specified script. All runs in this tutorial will be performed under the same experiment. 
" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import Experiment\n", - "\n", - "experiment_name = 'azure-service-classifier' \n", - "experiment = Experiment(workspace, name=experiment_name)" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "#### Create TensorFlow Estimator\n", - "\n", - "The Azure Machine Learning Python SDK Estimator classes allow you to easily construct run configurations for your experiments. They allow you to define parameters such as the training script to run, the compute target to run it on, framework versions, additional package requirements, etc. \n", - "\n", - "You can also use a generic [Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.estimator.estimator?view=azure-ml-py) to submit training scripts that use any learning framework you choose.\n", - "\n", - "For popular libraries like PyTorch and TensorFlow you can use their framework-specific estimators. We will use the [TensorFlow Estimator](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) for our experiment."
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import Environment\n", - "\n", - "env = Environment.get(workspace, name='AzureML-TensorFlow-2.0-GPU')\n", - "env.python.conda_dependencies.add_pip_package(\"transformers==2.0.0\")\n", - "env.python.conda_dependencies.add_pip_package(\"absl-py\")\n", - "env.python.conda_dependencies.add_pip_package(\"azureml-dataprep\")\n", - "env.python.conda_dependencies.add_pip_package(\"h5py<3.0.0\")\n", - "env.python.conda_dependencies.add_pip_package(\"pandas\")\n", - "\n", - "env.name = \"Bert_training\"\n", - "env" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Let's go over how a Run is executed in Azure Machine Learning.\n", - "\n", - "![](./images/aml-run.png)" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "A quick description for each of the parameters we have just defined:\n", - "\n", - "- `source_directory`: This specifies the root directory of our source code. \n", - "- `entry_script`: This specifies the training script to run. It should be relative to the source_directory.\n", - "- `compute_target`: This specifies the compute target to run the job on. We will use the one created earlier.\n", - "- `script_params`: This specifies the input parameters to the training script. Please note:\n", - "\n", - " 1) *azure_dataset.as_named_input('azureservicedata').as_mount()* mounts the dataset to the remote compute and provides the path to the dataset on our datastore. \n", - " \n", - " 2) All outputs from the training script must be written to the './outputs' directory as this is the only directory that will be saved to the run. \n", - " \n", - " \n", - "- `framework_version`: This specifies the version of TensorFlow to use. 
Use TensorFlow.get_supported_versions() to see all supported versions.\n", - "- `use_gpu`: This will use the GPU on the compute target for training if set to True.\n", - "- `pip_packages`: This allows you to define any additional libraries to install before training." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### 1) Add Metrics Logging\n", - "\n", - "So we were able to clone a TensorFlow 2.0 project and run it without any changes. However, with larger-scale projects we would want to log some metrics in order to make it easier to monitor the performance of our model. \n", - "\n", - "We can do this by adding a few lines of code into our training script:\n", - "\n", - "```python\n", - "# 1) Import SDK Run object\n", - "from azureml.core.run import Run\n", - "\n", - "# 2) Get current service context\n", - "run = Run.get_context()\n", - "\n", - "# 3) Log the metrics that we want\n", - "run.log('val_accuracy', float(logs.get('val_accuracy')))\n", - "run.log('accuracy', float(logs.get('accuracy')))\n", - "```\n", - "We've created a *train_logging.py* script that includes logging metrics as shown above. \n", - "\n", - "* **ACTION**: Explore _train_logging.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "We can submit this run in the same way that we did before. 
\n", - "\n", - "*Since our cluster can scale automatically to two nodes, we can run this job simultaneously with the previous one.*" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import ScriptRun, ScriptRunConfig\n", - "\n", - "scriptrun = ScriptRunConfig(source_directory='./',\n", - " script='train_logging.py',\n", - " arguments=['--data_dir', azure_dataset.as_named_input('azureservicedata').as_mount(),\n", - " '--max_seq_length', 128,\n", - " '--batch_size', 32,\n", - " '--learning_rate', 3e-5,\n", - " '--steps_per_epoch', 5, # to reduce time for workshop\n", - " '--num_epochs', 2, # to reduce time for workshop\n", - " '--export_dir','./outputs/model'],\n", - " compute_target=compute_target,\n", - " environment=env)\n", - "\n", - "run = experiment.submit(scriptrun)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Now if we view the current details of the run, you will notice that the metrics are logged into graphs." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.widgets import RunDetails\n", - "RunDetails(run).show()" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "#### 2) Monitoring metrics with Tensorboard\n", - "\n", - "TensorBoard is a popular deep learning training visualization tool, and it's built into the TensorFlow framework. 
We can easily add tracking of the metrics in TensorBoard format by adding a TensorBoard callback to the **fit** function call.\n", - "```python\n", - " # Add callback to record Tensorboard events\n", - " model.fit(train_dataset, epochs=FLAGS.num_epochs, \n", - " steps_per_epoch=FLAGS.steps_per_epoch, validation_data=valid_dataset, \n", - " callbacks=[\n", - " AmlLogger(),\n", - " tf.keras.callbacks.TensorBoard(update_freq='batch')]\n", - " )\n", - "```\n", - "\n", - "#### Launch Tensorboard\n", - "Azure ML service provides built-in integration with TensorBoard through the **azureml-tensorboard** package.\n", - "\n", - "While the run is in progress (or after it has completed), we can start TensorBoard with the run as its target, and it will begin streaming logs." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "# from azureml.tensorboard import Tensorboard\n", - "\n", - "# # The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here\n", - "# tb = Tensorboard([run])\n", - "\n", - "# # If successful, start() returns a string with the URI of the instance.\n", - "# tb.start()" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Stop Tensorboard\n", - "When you're done, make sure to call the stop() method of the Tensorboard object, or it will stay running even after your job completes." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "# tb.stop()" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Check the model performance\n", - "\n", - "The last training run produced a model of decent accuracy. Let's test it out and see what it does. 
First, let's check what files our latest training run produced and download the model files.\n", - "\n", - "#### Download model files" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "run.get_file_names()" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "run.download_files(prefix='outputs/model')\n", - "\n", - "# If you haven't finished training the model, just download a pre-made model from the datastore\n", - "datastore.download('./',prefix=\"model\")" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Instantiate the model\n", - "\n", - "The next step is to import our model class and instantiate the fine-tuned model from the model file." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from model import TFBertForMultiClassification\n", - "from transformers import BertTokenizer\n", - "import tensorflow as tf\n", - "def encode_example(text, max_seq_length):\n", - " # Encode inputs using tokenizer\n", - " inputs = tokenizer.encode_plus(\n", - " text,\n", - " add_special_tokens=True,\n", - " max_length=max_seq_length\n", - " )\n", - " input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n", - " # The mask has 1 for real tokens and 0 for padding tokens. 
Only real tokens are attended to.\n", - " attention_mask = [1] * len(input_ids)\n", - " # Zero-pad up to the sequence length.\n", - " padding_length = max_seq_length - len(input_ids)\n", - " input_ids = input_ids + ([0] * padding_length)\n", - " attention_mask = attention_mask + ([0] * padding_length)\n", - " token_type_ids = token_type_ids + ([0] * padding_length)\n", - " \n", - " return input_ids, attention_mask, token_type_ids\n", - " \n", - "labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", - "# Load model and tokenizer\n", - "loaded_model = TFBertForMultiClassification.from_pretrained('model', num_labels=len(labels))\n", - "tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n", - "print(\"Model loaded from disk.\")" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Define prediction function\n", - "\n", - "Using the model object we can interpret new questions and predict what Azure service they talk about. To do that conveniently we'll define a **predict** function."
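Before that, here is a quick sanity check of the zero-padding arithmetic used in `encode_example` above. This is a plain-Python sketch with made-up token ids, so no tokenizer or model is needed:

```python
def pad_inputs(input_ids, token_type_ids, max_seq_length):
    # Mirrors the padding step in encode_example: the attention mask is
    # 1 for real tokens and 0 for padding, and all three sequences are
    # zero-padded out to max_seq_length.
    attention_mask = [1] * len(input_ids)
    padding_length = max_seq_length - len(input_ids)
    input_ids = input_ids + [0] * padding_length
    attention_mask = attention_mask + [0] * padding_length
    token_type_ids = token_type_ids + [0] * padding_length
    return input_ids, attention_mask, token_type_ids

# Three fake token ids padded to a sequence length of 8.
ids, mask, types = pad_inputs([101, 2054, 102], [0, 0, 0], 8)
print(ids)   # [101, 2054, 102, 0, 0, 0, 0, 0]
print(mask)  # [1, 1, 1, 0, 0, 0, 0, 0]
```

All three outputs always have length `max_seq_length`, which is what the model's fixed-size input tensors require.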
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "# Prediction function\n", - "def predict(question):\n", - " input_ids, attention_mask, token_type_ids = encode_example(question, 128)\n", - " predictions = loaded_model.predict({\n", - " 'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),\n", - " 'attention_mask': tf.convert_to_tensor([attention_mask], dtype=tf.int32),\n", - " 'token_type_ids': tf.convert_to_tensor([token_type_ids], dtype=tf.int32)\n", - " })\n", - " prediction = labels[predictions[0].argmax().item()]\n", - " probability = predictions[0].max()\n", - " print('Prediction: {}'.format(prediction))\n", - " print('Probability: {}'.format(probability))" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Experiment with our new model\n", - "\n", - "Now we can easily test responses of the model to new inputs. \n", - "* **ACTION**: Invent your own input for one of the 5 services our model understands: 'azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions'."
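The label selection inside `predict` is just an argmax over the model's output probabilities. A minimal sketch of that step with a made-up probability vector (no model call):

```python
labels = ['azure-web-app-service', 'azure-storage', 'azure-devops',
          'azure-virtual-machine', 'azure-functions']

def pick_label(probs, labels):
    # Same argmax/max logic as predict() above, without the model.
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best], probs[best]

# A hypothetical probability vector where 'azure-devops' dominates.
label, prob = pick_label([0.05, 0.10, 0.70, 0.10, 0.05], labels)
print(label, prob)  # azure-devops 0.7
```

The reported "probability" is simply the largest entry of the softmax output, so a low value signals the model is unsure between several services.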
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "# Route question\n", - "predict(\"How can I specify Service Principal in devops pipeline when deploying virtual machine\")" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "# Now a more tricky case - the opposite\n", - "predict(\"How can virtual machine trigger devops pipeline\")" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Appendix\n", - "\n", - "__The following sections are not part of this workshop.__" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Distributed Training Across Multiple GPUs\n", - "\n", - "Distributed training allows us to train across multiple nodes if your cluster allows it. Azure Machine Learning service helps manage the infrastructure for training distributed jobs. All we have to do is add the following parameters to our estimator object in order to enable this:\n", - "\n", - "- `node_count`: The number of nodes to run this job across. Our cluster has a maximum node limit of 2, so we can set this number up to 2.\n", - "- `process_count_per_node`: The number of processes to enable per node. The nodes in our cluster have 2 GPUs each. We will set this value to 2 which will allow us to distribute the load on both GPUs. Using multi-GPU nodes is beneficial, as communication bandwidth within a single machine is higher.\n", - "- `distributed_training`: The backend to use for our distributed job. We will be using an MPI (Message Passing Interface) backend, which is used by the Horovod framework.\n", - "\n", - "We use [Horovod](https://github.com/horovod/horovod), which is a framework that allows us to easily modify our existing training script to run across multiple nodes/GPUs. 
The distributed training script is saved as *train_horovod.py*.\n", - "\n", - "* **ACTION**: Explore _train_horovod.py_ using [Azure ML studio > Notebooks tab](images/azuremlstudio-notebooks-explore.png)" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "We can submit this run in the same way that we did with the others, but with the additional parameters." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import ScriptRun, ScriptRunConfig\n", - "from azureml.core.runconfig import MpiConfiguration\n", - "\n", - "scriptrun3 = ScriptRunConfig(source_directory='./',\n", - " script='train_horovod.py',\n", - " arguments=['--data_dir', azure_dataset.as_named_input('azureservicedata').as_mount(),\n", - " '--max_seq_length', 128,\n", - " '--batch_size', 32,\n", - " '--learning_rate', 3e-5,\n", - " '--steps_per_epoch', 150,\n", - " '--num_epochs', 3,\n", - " '--export_dir','./outputs/model'],\n", - " compute_target=compute_target,\n", - " distributed_job_config=MpiConfiguration(node_count=2),\n", - " environment=env)\n", - "\n", - "run3 = experiment.submit(scriptrun3)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Once again, we can view the current details of the run. " - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.widgets import RunDetails\n", - "RunDetails(run3).show()" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "Once the run completes, note the time it took. It should be around 5 minutes. As you can see, by moving to cloud GPUs and using distributed training we managed to reduce the training time of our model from more than an hour to 5 minutes. This greatly improves the speed of experimentation and innovation."
- ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Tune Hyperparameters Using Hyperdrive\n", - "\n", - "So far we have been putting in default hyperparameter values, but in practice we would need to tune these values to optimize the performance. Azure Machine Learning service provides many methods for tuning hyperparameters using different strategies.\n", - "\n", - "The first step is to choose the parameter space that we want to search. We have a few choices to make here:\n", - "\n", - "- **Parameter Sampling Method**: This is how we select the combinations of parameters to sample. Azure Machine Learning service offers [RandomParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.randomparametersampling?view=azure-ml-py), [GridParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.gridparametersampling?view=azure-ml-py), and [BayesianParameterSampling](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.bayesianparametersampling?view=azure-ml-py). We will use the `GridParameterSampling` method.\n", - "- **Parameters To Search**: We will be searching for optimal combinations of `learning_rate` and `num_epochs`.\n", - "- **Parameter Expressions**: This defines the [functions that can be used to describe a hyperparameter search space](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions?view=azure-ml-py), which can be discrete or continuous. We will be using a `discrete set of choices`.\n", - "\n", - "The following code allows us to define these options."
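Since grid sampling tries every combination of the discrete choices, two candidate learning rates and two candidate epoch counts expand to 2 x 2 = 4 runs. A plain-Python sketch of that expansion (the values match the choices used in this tutorial):

```python
from itertools import product

# The discrete search space used in this tutorial's grid sampling.
search_space = {
    '--learning_rate': [3e-5, 3e-4],
    '--num_epochs': [3, 4],
}

# Grid sampling enumerates the full cross product of the choices.
names = list(search_space)
combos = [dict(zip(names, values)) for values in product(*search_space.values())]
for combo in combos:
    print(combo)
print(len(combos), 'runs')  # 4 runs
```

This is why `max_total_runs` in the HyperDrive configuration later only needs to cover the size of this cross product.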
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.train.hyperdrive import GridParameterSampling\n", - "from azureml.train.hyperdrive.parameter_expressions import choice\n", - "\n", - "\n", - "param_sampling = GridParameterSampling( {\n", - " '--learning_rate': choice(3e-5, 3e-4),\n", - " '--num_epochs': choice(3, 4)\n", - " }\n", - ")" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "The next step is to define how we want to measure our performance. We do so by specifying two classes:\n", - "\n", - "- **[PrimaryMetricGoal](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.primarymetricgoal?view=azure-ml-py)**: We want to `MAXIMIZE` the `val_accuracy` that is logged in our training script.\n", - "- **[BanditPolicy](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.banditpolicy?view=azure-ml-py)**: A policy for early termination so that jobs which don't show promising results will stop automatically." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.train.hyperdrive import BanditPolicy\n", - "from azureml.train.hyperdrive import PrimaryMetricGoal\n", - "\n", - "primary_metric_name='val_accuracy'\n", - "primary_metric_goal=PrimaryMetricGoal.MAXIMIZE\n", - "\n", - "early_termination_policy = BanditPolicy(slack_factor = 0.1, evaluation_interval=1, delay_evaluation=2)" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "We define an estimator as usual, but this time without the script parameters that we are planning to search."
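A note on the `BanditPolicy` created above: with a MAXIMIZE goal and `slack_factor=0.1`, our reading of the policy (worth verifying against the Hyperdrive docs) is that a run is terminated when its best metric so far falls below `best_so_far / (1 + slack_factor)`. A rough plain-Python sketch of that rule:

```python
def should_terminate(run_metric, best_metric, slack_factor=0.1):
    # Hedged reading of BanditPolicy for a MAXIMIZE goal: cut runs whose
    # best metric is outside the slack ratio of the current leader.
    return run_metric < best_metric / (1 + slack_factor)

# Leader has val_accuracy 0.88, so the cutoff is 0.88 / 1.1 = 0.8:
# a run at 0.75 is terminated, while one at 0.82 survives.
print(should_terminate(0.75, 0.88))  # True
print(should_terminate(0.82, 0.88))  # False
```

The `evaluation_interval` and `delay_evaluation` arguments control how often and how soon this check is applied, so early epochs are not penalized.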
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import ScriptRun, ScriptRunConfig\n", - "\n", - "scriptrun4 = ScriptRunConfig(source_directory='./',\n", - " script='train_logging.py',\n", - " arguments=['--data_dir', azure_dataset.as_named_input('azureservicedata').as_mount(),\n", - " '--max_seq_length', 128,\n", - " '--batch_size', 32,\n", - " '--learning_rate', 3e-5,\n", - " '--steps_per_epoch', 150,\n", - " '--num_epochs', 3,\n", - " '--export_dir','./outputs/model'],\n", - " compute_target=compute_target,\n", - " environment=env)\n" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Finally, we add all our parameters in a [HyperDriveConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.hyperdriveconfig?view=azure-ml-py) class and submit it as a run. " - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.train.hyperdrive import HyperDriveConfig\n", - "\n", - "hyperdrive_run_config = HyperDriveConfig(run_config=scriptrun4, \n", - " hyperparameter_sampling=param_sampling, \n", - " policy=early_termination_policy, \n", - " primary_metric_name=primary_metric_name, \n", - " primary_metric_goal=PrimaryMetricGoal.MAXIMIZE, \n", - " max_total_runs=10,\n", - " max_concurrent_runs=2)\n", - "\n", - "run4 = experiment.submit(hyperdrive_run_config)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "When we view the details of our run this time, we will see information and metrics for every run in our hyperparameter tuning." 
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.widgets import RunDetails\n", - "RunDetails(run4).show()" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": false - } - }, - { - "cell_type": "markdown", - "source": [ - "We can retrieve the best run based on our defined metric." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "best_run = run4.get_best_run_by_primary_metric()" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "## Register Model\n", - "\n", - "A registered [model](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model(class)?view=azure-ml-py) is a reference to the directory or files that make up your model. After registering a model, you and other people in your workspace can easily gain access to and deploy your model without having to run the training script again. \n", - "\n", - "We need to define the following parameters to register a model:\n", - "\n", - "- `model_name`: The name for your model. If the model name already exists in the workspace, it will create a new version for the model.\n", - "- `model_path`: The path to where the model is stored. In our case, this was the *export_dir* defined in our estimators.\n", - "- `description`: A description for the model.\n", - "\n", - "Let's register the best run from our hyperparameter tuning."
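The version bump described above ("If the model name already exists in the workspace, it will create a new version") can be sketched with a plain dict standing in for the workspace registry. This is a hypothetical helper to illustrate the semantics, not the AzureML API:

```python
def register(registry: dict, model_name: str, model_path: str) -> int:
    """Append a new entry under model_name and return its 1-based version,
    mimicking how re-registering a name creates a new version."""
    versions = registry.setdefault(model_name, [])
    versions.append(model_path)
    return len(versions)

registry = {}
first = register(registry, 'azure-service-classifier', './outputs/model')
second = register(registry, 'azure-service-classifier', './outputs/model')
print(first, second)  # same name registered twice -> versions 1 and 2
```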
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "model = best_run.register_model(model_name='azure-service-classifier', \n", - " model_path='./outputs/model',\n", - " datasets=[('train, test, validation data', azure_dataset)],\n", - " description='BERT model for classifying azure services on stackoverflow posts.')" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "We have registered the model with Dataset reference. \n", - "* **ACTION**: Check dataset to model link in **Azure ML studio > Datasets tab > Azure Service Dataset**." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "In the [next tutorial](), we will perform inferencing on this model and deploy it to a web service." - ], - "metadata": {} - } - ], - "metadata": { - "pygments_lexer": "ipython3", - "name": "python", - "mimetype": "text/x-python", - "npconvert_exporter": "python", - "kernel_info": { - "name": "python3-azureml" - }, - "language_info": { - "name": "python", - "version": "3.6.9", - "mimetype": "text/x-python", - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "pygments_lexer": "ipython3", - "nbconvert_exporter": "python", - "file_extension": ".py" - }, - "version": 3, - "kernelspec": { - "name": "python3-azureml", - "language": "python", - "display_name": "Python 3.6 - AzureML" - }, - "file_extension": ".py", - "nteract": { - "version": "nteract-front-end@1.0.0" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file diff --git a/1-Training/data/.amlignore b/1-Training/data/.amlignore new file mode 100644 index 0000000..0fa594b --- /dev/null +++ b/1-Training/data/.amlignore @@ -0,0 +1,6 @@ +## This file was auto generated by the Azure Machine Learning Studio. Please do not remove. 
+## Read more about the .amlignore file here: https://docs.microsoft.com/azure/machine-learning/how-to-save-write-experiment-files#storage-limits-of-experiment-snapshots + +.ipynb_aml_checkpoints/ +*.amltmp +*.amltemp \ No newline at end of file diff --git a/1-Training/data/.amlignore.amltmp b/1-Training/data/.amlignore.amltmp new file mode 100644 index 0000000..0fa594b --- /dev/null +++ b/1-Training/data/.amlignore.amltmp @@ -0,0 +1,6 @@ +## This file was auto generated by the Azure Machine Learning Studio. Please do not remove. +## Read more about the .amlignore file here: https://docs.microsoft.com/azure/machine-learning/how-to-save-write-experiment-files#storage-limits-of-experiment-snapshots + +.ipynb_aml_checkpoints/ +*.amltmp +*.amltemp \ No newline at end of file diff --git a/1-Training/requirements.txt b/1-Training/requirements.txt deleted file mode 100644 index b25867f..0000000 --- a/1-Training/requirements.txt +++ /dev/null @@ -1,500 +0,0 @@ -absl-py==0.9.0 -adal==1.2.4 -adlfs==0.5.9 -aiohttp==3.6.2 -alabaster==0.7.12 -alembic==1.4.2 -anaconda-client==1.7.2 -anaconda-project==0.8.3 -ansiwrap==0.8.4 -antlr4-python3-runtime==4.7.2 -applicationinsights==0.11.9 -argcomplete==1.11.1 -asn1crypto==1.0.1 -astor==0.8.1 -astroid==2.3.1 -astropy==3.2.1 -async-timeout==3.0.1 -atari-py==0.2.6 -atomicwrites==1.3.0 -attrs==19.3.0 -autopep8==1.5.3 -azure-batch==9.0.0 -azure-cli==2.8.0 -azure-cli-command-modules-nspkg==2.0.3 -azure-cli-core==2.8.0 -azure-cli-nspkg==3.0.4 -azure-cli-telemetry==1.0.4 -azure-common==1.1.25 -azure-core==1.9.0 -azure-cosmos==3.1.2 -azure-datalake-store==0.0.48 -azure-functions-devops-build==0.0.22 -azure-graphrbac==0.61.1 -azure-identity==1.2.0 -azure-keyvault==1.1.0 -azure-loganalytics==0.1.0 -azure-mgmt-advisor==2.0.1 -azure-mgmt-apimanagement==0.1.0 -azure-mgmt-appconfiguration==0.4.0 -azure-mgmt-applicationinsights==0.1.1 -azure-mgmt-authorization==0.60.0 -azure-mgmt-batch==9.0.0 -azure-mgmt-batchai==2.0.0 -azure-mgmt-billing==0.2.0 
-azure-mgmt-botservice==0.2.0 -azure-mgmt-cdn==4.1.0rc1 -azure-mgmt-cognitiveservices==6.2.0 -azure-mgmt-compute==12.1.0 -azure-mgmt-consumption==2.0.0 -azure-mgmt-containerinstance==1.5.0 -azure-mgmt-containerregistry==2.8.0 -azure-mgmt-containerservice==9.0.1 -azure-mgmt-core==1.2.2 -azure-mgmt-cosmosdb==0.14.0 -azure-mgmt-datalake-analytics==0.2.1 -azure-mgmt-datalake-nspkg==3.0.1 -azure-mgmt-datalake-store==0.5.0 -azure-mgmt-datamigration==0.1.0 -azure-mgmt-deploymentmanager==0.2.0 -azure-mgmt-devtestlabs==4.0.0 -azure-mgmt-dns==2.1.0 -azure-mgmt-eventgrid==2.2.0 -azure-mgmt-eventhub==4.0.0 -azure-mgmt-hdinsight==1.4.0 -azure-mgmt-imagebuilder==0.4.0 -azure-mgmt-iotcentral==3.0.0 -azure-mgmt-iothub==0.12.0 -azure-mgmt-iothubprovisioningservices==0.2.0 -azure-mgmt-keyvault==2.2.0 -azure-mgmt-kusto==0.3.0 -azure-mgmt-loganalytics==0.6.0 -azure-mgmt-managedservices==1.0.0 -azure-mgmt-managementgroups==0.2.0 -azure-mgmt-maps==0.1.0 -azure-mgmt-marketplaceordering==0.2.1 -azure-mgmt-media==2.2.0 -azure-mgmt-monitor==0.9.0 -azure-mgmt-msi==0.2.0 -azure-mgmt-netapp==0.8.0 -azure-mgmt-network==17.0.0 -azure-mgmt-nspkg==3.0.2 -azure-mgmt-policyinsights==0.4.0 -azure-mgmt-privatedns==0.1.0 -azure-mgmt-rdbms==2.2.0 -azure-mgmt-recoveryservices==0.4.0 -azure-mgmt-recoveryservicesbackup==0.6.0 -azure-mgmt-redhatopenshift==0.1.0 -azure-mgmt-redis==7.0.0rc1 -azure-mgmt-relay==0.1.0 -azure-mgmt-reservations==0.6.0 -azure-mgmt-resource==10.1.0 -azure-mgmt-search==2.1.0 -azure-mgmt-security==0.4.1 -azure-mgmt-servicebus==0.6.0 -azure-mgmt-servicefabric==0.4.0 -azure-mgmt-signalr==0.3.0 -azure-mgmt-sql==0.18.0 -azure-mgmt-sqlvirtualmachine==0.5.0 -azure-mgmt-storage==11.1.0 -azure-mgmt-trafficmanager==0.51.0 -azure-mgmt-web==0.46.0 -azure-multiapi-storage==0.3.5 -azure-nspkg==3.0.2 -azure-storage-blob==12.6.0 -azureml-accel-models==1.9.0 -azureml-automl-core==1.11.0 -azureml-automl-runtime==1.11.0 -azureml-cli-common==1.9.0 -azureml-contrib-dataset==1.9.0 
-azureml-contrib-fairness==1.9.0 -azureml-contrib-gbdt==1.9.0 -azureml-contrib-interpret==1.9.0 -azureml-contrib-notebook==1.11.0 -azureml-contrib-pipeline-steps==1.9.0 -azureml-contrib-reinforcementlearning==1.9.0 -azureml-contrib-server==1.9.0 -azureml-contrib-services==1.9.0 -azureml-core==1.23.0 -azureml-datadrift==1.9.0 -azureml-dataprep==2.0.2 -azureml-dataprep-native==14.2.1 -azureml-dataset-runtime==1.11.0.post1 -azureml-defaults==1.11.0 -azureml-explain-model==1.11.0 -azureml-interpret==1.11.0 -azureml-mlflow==1.9.0 -azureml-model-management-sdk==1.0.1b6.post1 -azureml-monitoring==0.1.0a18 -azureml-opendatasets==1.9.0 -azureml-pipeline==1.11.0 -azureml-pipeline-core==1.11.0 -azureml-pipeline-steps==1.11.0 -azureml-samples @ file:///mnt/jupyter-azsamples -azureml-sdk==1.11.0 -azureml-telemetry==1.11.0 -azureml-tensorboard==1.9.0 -azureml-train==1.11.0 -azureml-train-automl==1.11.0 -azureml-train-automl-client==1.11.0 -azureml-train-automl-runtime==1.11.0.post1 -azureml-train-core==1.11.0 -azureml-train-restclients-hyperdrive==1.11.0 -azureml-widgets==1.11.0 -Babel==2.7.0 -backcall==0.2.0 -backports.os==0.1.1 -backports.shutil-get-terminal-size==1.0.0 -backports.tempfile==1.0 -backports.weakref==1.0.post1 -bcrypt==3.1.7 -beautifulsoup4==4.8.0 -bitarray==1.0.1 -bkcharts==0.2 -bleach==3.1.5 -blessings==1.7 -bokeh==2.2.3 -boto==2.49.0 -boto3==1.14.18 -botocore==1.17.18 -Bottleneck==1.2.1 -cachetools==4.1.1 -certifi==2020.12.5 -cffi==1.14.0 -chardet==3.0.4 -click==7.1.2 -cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1598884132938/work -clyent==1.2.2 -colorama==0.4.3 -configparser==3.7.4 -contextlib2==0.6.0.post1 -contextvars==2.4 -convertdate @ file:///home/conda/feedstock_root/build_artifacts/convertdate_1589287890831/work -coremltools @ git+https://github.com/apple/coremltools@13c064ed99ab1da7abea0196e4ddf663ede48aad -cryptography==2.9.2 -cssselect==1.1.0 -cycler==0.10.0 -Cython==0.29.20 -cytoolz==0.10.0 -dask==2021.2.0 -dask-glm==0.2.0 -dask-ml==1.8.0 
-databricks-cli==0.11.0 -decorator==4.4.2 -defusedxml==0.6.0 -dill==0.3.2 -distributed==2021.2.0 -distro==1.5.0 -dm-tree==0.1.5 -docker==4.2.2 -docutils==0.15.2 -dotnetcore2==2.1.14 -encrypted-inference==0.9 -entrypoints==0.3 -enum34==1.1.10 -et-xmlfile==1.0.1 -fabric==2.5.0 -fairlearn==0.4.6 -fastcache==1.1.0 -filelock==3.0.12 -fire==0.3.1 -flake8==3.7.9 -Flask==1.0.3 -fsspec==0.8.5 -fusepy==3.0.1 -future==0.18.2 -gast==0.2.2 -gensim==3.8.3 -gevent==1.4.0 -gitdb==4.0.5 -GitPython==3.1.3 -glob2==0.7 -gmpy2==2.0.8 -google==2.0.3 -google-auth==1.18.0 -google-auth-oauthlib==0.4.1 -google-pasta==0.2.0 -gorilla==0.3.0 -gpustat==0.6.0 -graphviz==0.16 -greenlet==0.4.15 -grpcio==1.30.0 -gunicorn==19.9.0 -gym==0.17.2 -h5py==2.9.0 -HeapDict==1.0.1 -holidays==0.9.11 -horovod==0.16.1 -html5lib==1.0.1 -humanfriendly==8.2 -idna==2.10 -idna-ssl==1.1.0 -imageio==2.6.0 -imagesize==1.1.0 -immutables @ file:///tmp/build/80754af9/immutables_1592425991038/work -importlib-metadata==1.7.0 -interpret-community==0.14.1 -interpret-core==0.1.21 -invoke==1.4.1 -ipykernel==5.3.1 -ipython==7.16.1 -ipython-genutils==0.2.0 -ipywidgets==7.5.1 -isodate==0.6.0 -isort==4.3.21 -itsdangerous==1.1.0 -javaproperties==0.5.1 -jdcal==1.4.1 -jedi==0.15.1 -jeepney==0.4.3 -Jinja2==2.11.2 -jmespath==0.10.0 -joblib==0.14.1 -jsmin==2.2.2 -json-logging-py==0.2 -json5==0.8.5 -jsondiff==1.2.0 -jsonpickle==1.4.1 -jsonschema==3.2.0 -jupyter==1.0.0 -jupyter-client==6.1.5 -jupyter-console==6.0.0 -jupyter-core==4.6.3 -jupyterlab==2.1.4 -jupyterlab-server @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_server_1593951277307/work -jupytext==1.5.1 -Keras==2.3.1 -Keras-Applications==1.0.8 -Keras-Preprocessing==1.1.2 -keras2onnx==1.6.0 -keyring==18.0.0 -kiwisolver==1.2.0 -knack==0.7.1 -lazy-object-proxy==1.4.2 -liac-arff==2.4.0 -libarchive-c==2.8 -lief==0.9.0 -lightgbm==2.3.0 -llvmlite==0.29.0 -locket==0.2.0 -lunardate==0.2.0 -lxml==4.4.1 -lz4==3.1.0 -Mako==1.1.3 -Markdown==3.2.2 -MarkupSafe==1.1.1 -matplotlib @ 
file:///tmp/build/80754af9/matplotlib-base_1592846044287/work -mccabe==0.6.1 -mistune==0.8.4 -mkl-fft==1.0.14 -mkl-random==1.1.0 -mkl-service==2.3.0 -mlflow==1.9.1 -mock==3.0.5 -more-itertools==7.2.0 -mpmath==1.1.0 -msal==1.4.1 -msal-extensions==0.1.3 -msgpack==1.0.2 -msrest==0.6.17 -msrestazure==0.6.4 -multidict==4.7.6 -multipledispatch==0.6.0 -nb-conda-kernels==2.2.3 -nbconvert==5.6.1 -nbformat==5.0.7 -ndg-httpsclient==0.5.1 -networkx==2.3 -nimbusml==1.7.1 -nltk==3.4.5 -nose==1.3.7 -notebook==6.0.3 -numba==0.45.1 -numexpr==2.7.0 -numpy==1.19.5 -numpydoc==0.9.1 -nvidia-ml-py3==7.352.0 -oauthlib==3.1.0 -olefile==0.46 -onnx==1.6.0 -onnxconverter-common==1.6.0 -onnxmltools==1.4.1 -onnxruntime==1.0.0 -opencv-python==4.3.0.36 -opencv-python-headless==4.3.0.36 -openpyxl==3.0.0 -opt-einsum==3.2.1 -packaging==20.4 -pandas==1.1.5 -pandas-ml==0.6.1 -pandocfilters==1.4.2 -papermill==1.2.1 -paramiko==2.7.1 -parsel==1.6.0 -parso==0.7.0 -partd==1.0.0 -path.py==12.0.1 -pathlib2==2.3.5 -pathspec==0.8.0 -patsy==0.5.1 -pep8==1.7.1 -pexpect==4.7.0 -pickleshare==0.7.5 -Pillow==8.1.0 -pkginfo==1.5.0.1 -pluggy==0.13.0 -ply==3.11 -pmdarima==1.1.1 -portalocker==1.7.0 -prometheus-client==0.8.0 -prometheus-flask-exporter==0.14.1 -prompt-toolkit==2.0.10 -protobuf==3.12.2 -psutil==5.7.0 -ptyprocess==0.6.0 -py==1.8.0 -py-cpuinfo==5.0.0 -py-spy==0.3.3 -py4j==0.10.9 -pyarrow==0.17.1 -pyasn1==0.4.8 -pyasn1-modules==0.2.8 -pycodestyle==2.5.0 -pycosat==0.6.3 -pycparser==2.20 -pycrypto==2.6.1 -pycurl==7.43.0.3 -pydocstyle==5.0.2 -pyflakes==2.1.1 -pyglet==1.5.0 -Pygments==2.6.1 -PyJWT==1.7.1 -pylint==2.4.2 -PyMeeus @ file:///home/conda/feedstock_root/build_artifacts/pymeeus_1589222711601/work -PyNaCl==1.4.0 -pyodbc==4.0.27 -pyOpenSSL==19.1.0 -pyparsing==2.4.7 -pyrsistent==0.16.0 -PySocks==1.7.1 -pyspark==3.0.0 -pystan==2.19.0.0 -pytest==5.4.3 -pytest-arraydiff==0.3 -pytest-astropy==0.5.0 -pytest-doctestplus==0.4.0 -pytest-openfiles==0.4.0 -pytest-remotedata==0.3.2 -python-dateutil==2.8.1 
-python-editor==1.0.4 -python-jsonrpc-server==0.3.4 -python-language-server==0.30.0 -pytorch-transformers==1.0.0 -pytz==2019.3 -PyWavelets==1.1.1 -PyYAML==5.1.2 -pyzmq==19.0.1 -QtAwesome==0.6.0 -qtconsole==4.5.5 -QtPy==1.9.0 -querystring-parser==1.2.4 -ray==0.8.6 -redis==3.4.1 -regex==2020.6.8 -requests==2.24.0 -requests-oauthlib==1.3.0 -rope==0.14.0 -rsa==4.6 -ruamel-yaml==0.15.46 -ruamel.yaml.clib==0.2.0 -s3transfer==0.3.3 -sacremoses==0.0.45 -scikit-image==0.17.2 -scikit-learn==0.24.1 -scipy==1.4.1 -scp==0.13.2 -scrapbook==0.2.0 -seaborn==0.9.0 -SecretStorage==3.1.2 -Send2Trash==1.5.0 -sentencepiece==0.1.91 -setuptools-git==1.2 -shap==0.34.0 -simplegeneric==0.8.1 -singledispatch==3.4.0.3 -six==1.12.0 -skl2onnx==1.4.9 -sklearn==0.0 -sklearn-pandas==1.7.0 -smart-open==1.9.0 -smmap==3.0.4 -snowballstemmer==2.0.0 -sortedcollections==1.1.2 -sortedcontainers==2.1.0 -soupsieve==1.9.3 -Sphinx==2.2.0 -sphinxcontrib-applehelp==1.0.1 -sphinxcontrib-devhelp==1.0.1 -sphinxcontrib-htmlhelp==1.0.2 -sphinxcontrib-jsmath==1.0.1 -sphinxcontrib-qthelp==1.0.2 -sphinxcontrib-serializinghtml==1.1.3 -sphinxcontrib-websupport==1.1.2 -spyder==3.3.0 -spyder-kernels==0.5.2 -SQLAlchemy==1.3.13 -sqlparse==0.3.1 -sshtunnel==0.1.5 -statsmodels==0.10.2 -sympy==1.4 -tables==3.5.2 -tabulate==0.8.7 -tblib @ file:///tmp/build/80754af9/tblib_1597928476713/work -tenacity==6.2.0 -tensorboard==2.2.2 -tensorboard-plugin-wit==1.7.0 -tensorboardX==2.1 -tensorflow==2.1.0 -tensorflow-estimator==2.1.0 -tensorflow-gpu==2.1.0 -termcolor==1.1.0 -terminado==0.8.2 -testpath==0.4.4 -textwrap3==0.9.2 -threadpoolctl==2.1.0 -tifffile==2020.7.4 -toml==0.10.1 -toolz==0.10.0 -torch==1.4.0 -torchvision==0.5.0 -tornado==6.1 -tqdm==4.47.0 -traitlets==4.3.3 -transformers==2.0.0 -typed-ast==1.4.0 -typing-extensions==3.7.4.2 -ujson==1.35 -unicodecsv==0.14.1 -urllib3==1.25.9 -vsts==0.1.25 -vsts-cd-manager==1.0.2 -w3lib==1.22.0 -waitress==1.4.4 -wcwidth==0.2.5 -webencodings==0.5.1 -websocket-client==0.57.0 -websockets==8.1 
-Werkzeug==0.16.1 -widgetsnbextension==3.5.1 -wikiextractor==3.0.4 -wrapt==1.11.2 -wurlitzer==1.0.3 -xgboost==0.90 -xlrd==1.2.0 -XlsxWriter==1.2.1 -xlwt==1.3.0 -xmltodict==0.12.0 -yapf==0.30.0 -yarl==1.4.2 -zict==1.0.0 -zipp==3.1.0 diff --git a/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb b/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb index c8039b3..f2b6217 100644 --- a/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb +++ b/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb @@ -41,28 +41,13 @@ " * Clean up Webservice" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## What is Azure Machine Learning Service?\n", - "Azure Machine Learning service is a cloud service that you can use to develop and deploy machine learning models. Using Azure Machine Learning service, you can track your models as you build, train, deploy, and manage them, all at the broad scale that the cloud provides.\n", - "![](./images/aml-overview.png)\n", - "\n", - "\n", - "#### How can we use the Azure Machine Learning SDK for deployment and inferencing of machine learning models?\n", - "Deployment and inferencing of a machine learning model is often a cumbersome process. Once you have a trained model and a scoring script working on your local machine, you will want to deploy this model as a web service.\n", - "\n", - "To facilitate deployment and inferencing, the Azure Machine Learning Python SDK provides a high-level abstraction for model deployment of a web service running on your [local](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#local) machine, in Azure Container Instance ([ACI](https://azure.microsoft.com/en-us/services/container-instances/)) or Azure Kubernetes Service ([AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/)), which allows users to easily deploy their models in the Azure ecosystem."
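Whichever target is chosen (local, ACI, or AKS), the deployed web service wraps a scoring script with the `init()`/`run()` entry points AzureML expects. A minimal, self-contained sketch of that contract with the model loading stubbed out (the stub prediction is an assumption for illustration):

```python
import json

model = None  # populated once per container by init()

def init():
    # A real scoring script would load the registered model from disk here;
    # this stub always predicts the same tag so the contract can be shown.
    global model
    model = lambda text: {'prediction': 'azure-virtual-machine',
                          'probability': '0.9'}

def run(raw_data: str) -> str:
    # Called once per HTTP request with the raw JSON body.
    data = json.loads(raw_data)
    return json.dumps(model(data['text']))

init()
print(run(json.dumps({'text': 'My VM is not working'})))
```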
- ] - }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites\n", "* Understand the [architecture and terms](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n", - "* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup) to:\n", + "* If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup) to:\n", " * Install the AML SDK\n", " * Create a workspace and its configuration file (config.json)\n", "* For local scoring test, you will also need to have Tensorflow and Keras installed in the current Jupyter kernel.\n", @@ -74,49 +59,6 @@ "```" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Enable Docker for non-root users" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# !sudo usermod -a -G docker $USER\n", - "# !newgrp docker" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Check if you have the correct permissions to run Docker. Running the line below should print:\n", - "```\n", - "CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# !docker ps" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - ">**Note:** Make sure you shut down your Jupyter notebook to enable this access. Go to your [Jupyter dashboard](/tree) and click `Quit` on the top right corner. After the shutdown, the Notebook will be automatically refreshed with the new permissions."
- ] - }, { "cell_type": "markdown", "metadata": {}, @@ -139,7 +81,7 @@ "\n", "If you are running this on a Notebook VM, the Azure Machine Learning Python SDK is installed by default. If you are running this locally, you can follow these [instructions](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/install?view=azure-ml-py) to install it using pip.\n", "\n", - "This tutorial requires version 1.0.69 or higher. We can import the Python SDK to ensure it has been properly installed:" + "This tutorial requires version 1.27.0 or higher. We can import the Python SDK to ensure it has been properly installed:" ] }, { @@ -178,46 +120,6 @@ " 'Resource group: ' + ws.resource_group, sep = '\\n')" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Register Datastore\n", - "A [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) is used to store connection information to a central data storage. This allows you to access your storage without having to hard code this (potentially confidential) information into your scripts. \n", - "\n", - "In this tutorial, the model had been previously prepped and uploaded into a central [Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. We will register this container into our workspace as a datastore using a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview). 
\n", - "\n", - "\n", - "\n", - "We need to define the following parameters to register a datastore:\n", - "\n", - "- `ws`: The workspace object\n", - "- `datastore_name`: The name of the datastore, case insensitive, can only contain alphanumeric characters and _.\n", - "- `container_name`: The name of the azure blob container.\n", - "- `account_name`: The storage account name.\n", - "- `sas_token`: An account SAS token, defaults to None.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.datastore import Datastore\n", - "\n", - "datastore_name = 'tfworld'\n", - "container_name = 'azure-service-classifier'\n", - "account_name = 'johndatasets'\n", - "sas_token = '?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2021-06-02T03:40:25Z&st=2020-03-09T19:40:25Z&spr=https&sig=bUwK7AJUj2c%2Fr90Qf8O1sojF0w6wRFgL2c9zMVCWNPA%3D'\n", - "\n", - "datastore = Datastore.register_azure_blob_container(workspace=ws, \n", - " datastore_name=datastore_name, \n", - " container_name=container_name,\n", - " account_name=account_name, \n", - " sas_token=sas_token)" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -231,7 +133,10 @@ "metadata": {}, "outputs": [], "source": [ - "datastore = ws.datastores['tfworld']" + "from azureml.core import Datastore\n", + "\n", + "datastore = Datastore.get(ws, 'mtcseattle')\n", + "datastore" ] }, { @@ -292,314 +197,6 @@ "```" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Inferencing on the test set\n", - "Let's check the version of the local Keras. Make sure it matches with the version number printed out in the training script. Otherwise you might not be able to load the model properly." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import keras\n", - "import tensorflow as tf\n", - "\n", - "print(\"Keras version:\", keras.__version__)\n", - "print(\"Tensorflow version:\", tf.__version__)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Install Transformers Library\n", - "We have trained the BERT model using Tensorflow 2.0 and the open-source [huggingface/transformers](https://github.com/huggingface/transformers) library. So before we can load the model, we need to make sure we have also installed the Transformers library." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%pip install transformers" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Load the Tensorflow 2.0 BERT model.\n", - "Load the downloaded Tensorflow 2.0 BERT model" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from transformers import BertTokenizer, TFBertPreTrainedModel, TFBertMainLayer\n", - "from transformers.modeling_tf_utils import get_initializer\n", - "class TFBertForMultiClassification(TFBertPreTrainedModel):\n", - " def __init__(self, config, *inputs, **kwargs):\n", - " super(TFBertForMultiClassification, self).__init__(config, *inputs, **kwargs)\n", - " self.num_labels = config.num_labels\n", - " self.bert = TFBertMainLayer(config, name='bert')\n", - " self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)\n", - " self.classifier = tf.keras.layers.Dense(config.num_labels,\n", - " kernel_initializer=get_initializer(config.initializer_range),\n", - " name='classifier',\n", - " activation='softmax')\n", - " def call(self, inputs, **kwargs):\n", - " outputs = self.bert(inputs, **kwargs)\n", - " pooled_output = outputs[1]\n", - " pooled_output = self.dropout(pooled_output, training=kwargs.get('training', 
False))\n", - " logits = self.classifier(pooled_output)\n", - " outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here\n", - " return outputs # logits, (hidden_states), (attentions)\n", - " \n", - "max_seq_length = 128\n", - "labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", - "loaded_model = TFBertForMultiClassification.from_pretrained(model_dir, num_labels=len(labels))\n", - "tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n", - "print(\"Model loaded from disk.\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Feed in test sentence to test the BERT model. And time the duration of the prediction." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%time\n", - "import json \n", - "\n", - "# Input test sentences\n", - "raw_data = json.dumps({\n", - " 'text': 'My VM is not working'\n", - "})\n", - "\n", - "# Encode inputs using tokenizer\n", - "inputs = tokenizer.encode_plus(\n", - " json.loads(raw_data)['text'],\n", - " add_special_tokens=True,\n", - " max_length=max_seq_length\n", - " )\n", - "input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n", - "\n", - "# The mask has 1 for real tokens and 0 for padding tokens. 
Only real tokens are attended to.\n", - "attention_mask = [1] * len(input_ids)\n", - "\n", - "# Zero-pad up to the sequence length.\n", - "padding_length = max_seq_length - len(input_ids)\n", - "input_ids = input_ids + ([0] * padding_length)\n", - "attention_mask = attention_mask + ([0] * padding_length)\n", - "token_type_ids = token_type_ids + ([0] * padding_length)\n", - " \n", - "# Make prediction\n", - "predictions = loaded_model.predict({\n", - " 'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),\n", - " 'attention_mask': tf.convert_to_tensor([attention_mask], dtype=tf.int32),\n", - " 'token_type_ids': tf.convert_to_tensor([token_type_ids], dtype=tf.int32)\n", - " })\n", - "\n", - "result = {\n", - " 'prediction': str(labels[predictions[0].argmax().item()]),\n", - " 'probability': str(predictions[0].max())\n", - " }\n", - "\n", - "print(result)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As you can see based on the sample sentence the model can predict the probability of the StackOverflow tags related to that sentence." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Inferencing with ONNX" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### ONNX and ONNX Runtime\n", - "**ONNX (Open Neural Network Exchange)** is an interoperable standard format for ML models, with support for both DNN and traditional ML. Models can be converted from a variety of frameworks, such as TensorFlow, Keras, PyTorch, scikit-learn, and more (see [ONNX Conversion tutorials](https://github.com/onnx/tutorials#converting-to-onnx-format)). 
This provides data teams with the flexibility to use their framework of choice for their training needs, while streamlining the process to operationalize these models for production usage in a consistent way.\n", - "\n", - " In this section, we will demonstrate how to use ONNX Runtime, a high performance inference engine for ONNX format models, for inferencing our model. Along with interoperability, ONNX Runtime's performance-focused architecture can also accelerate inferencing for many models through graph optimizations, utilization of custom accelerators, and more. You can find more about performance tuning [here](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_Perf_Tuning.md)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Download ONNX Model\n", - "To visualize the model, we can use Netron. Click [here](https://lutzroeder.github.io/netron/) to open the browser version and load the model." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "datastore.download('.',prefix=\"model/bert_tf2.onnx\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Install ONNX Runtime" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%pip install onnxruntime" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Loading ONNX Model\n", - "Load the downloaded ONNX BERT model." 
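Before the session can be run, inputs must be tokenized and padded exactly as they were during training. The zero-padding and attention-mask logic used in the inference cells of this notebook can be isolated as a small pure-Python helper (the token ids passed in below are illustrative values, not real BERT vocabulary lookups):

```python
def pad_inputs(input_ids, token_type_ids, max_seq_length=128):
    """Zero-pad token ids to max_seq_length and build the attention mask:
    1 for real tokens, 0 for padding (only real tokens are attended to)."""
    attention_mask = [1] * len(input_ids)
    pad = max_seq_length - len(input_ids)
    return (input_ids + [0] * pad,
            attention_mask + [0] * pad,
            token_type_ids + [0] * pad)

ids, mask, types = pad_inputs([101, 1422, 26734, 102], [0, 0, 0, 0])
print(len(ids), sum(mask))  # all three lists are length 128; 4 real tokens
```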
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import numpy as np\n", - "import onnxruntime as rt\n", - "from transformers import BertTokenizer, TFBertPreTrainedModel, TFBertMainLayer\n", - "max_seq_length = 128\n", - "labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", - "tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n", - "\n", - "sess = rt.InferenceSession(\"./model/bert_tf2.onnx\")\n", - "print(\"ONNX Model loaded from disk.\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### View the inputs and outputs of converted ONNX model" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "for i in range(len(sess.get_inputs())):\n", - " input_name = sess.get_inputs()[i].name\n", - " print(\"Input name :\", input_name)\n", - " input_shape = sess.get_inputs()[i].shape\n", - " print(\"Input shape :\", input_shape)\n", - " input_type = sess.get_inputs()[i].type\n", - " print(\"Input type :\", input_type)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "for i in range(len(sess.get_outputs())):\n", - " output_name = sess.get_outputs()[i].name\n", - " print(\"Output name :\", output_name) \n", - " output_shape = sess.get_outputs()[i].shape\n", - " print(\"Output shape :\", output_shape)\n", - " output_type = sess.get_outputs()[i].type\n", - " print(\"Output type :\", output_type)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Inferencing with ONNX Runtime" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%time\n", - "import json \n", - "\n", - "# Input test sentences\n", - "raw_data = json.dumps({\n", - " 'text': 'My VM is not working'\n", - "})\n", - "\n", - "labels = 
['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", - "\n", - "# Encode inputs using tokenizer\n", - "inputs = tokenizer.encode_plus(\n", - " json.loads(raw_data)['text'],\n", - " add_special_tokens=True,\n", - " max_length=max_seq_length\n", - " )\n", - "input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n", - "\n", - " # The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to.\n", - "attention_mask = [1] * len(input_ids)\n", - "\n", - " # Zero-pad up to the sequence length.\n", - "padding_length = max_seq_length - len(input_ids)\n", - "input_ids = input_ids + ([0] * padding_length)\n", - "attention_mask = attention_mask + ([0] * padding_length)\n", - "token_type_ids = token_type_ids + ([0] * padding_length)\n", - " \n", - " # Make prediction\n", - "convert_input = {\n", - " sess.get_inputs()[0].name: np.array(tf.convert_to_tensor([token_type_ids], dtype=tf.int32)),\n", - " sess.get_inputs()[1].name: np.array(tf.convert_to_tensor([input_ids], dtype=tf.int32)),\n", - " sess.get_inputs()[2].name: np.array(tf.convert_to_tensor([attention_mask], dtype=tf.int32))\n", - " }\n", - "\n", - "predictions = sess.run([output_name], convert_input)\n", - "\n", - "result = {\n", - " 'prediction': str(labels[predictions[0].argmax().item()]),\n", - " 'probability': str(predictions[0].max())\n", - " }\n", - "\n", - "print(result)" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -674,7 +271,7 @@ "from azureml.core import Environment\n", "from azureml.core.conda_dependencies import CondaDependencies \n", "\n", - "myenv = CondaDependencies.create(conda_packages=['numpy','pandas'],\n", + "myenv = CondaDependencies.create(conda_packages=['pip','numpy','pandas'],\n", " pip_packages=['numpy','pandas','inference-schema[numpy-support]','azureml-defaults','tensorflow==2.0.0','transformers==2.0.0','h5py<3.0.0'])\n", "\n", "with open(\"myenv.yml\",\"w\") as f:\n", @@ 
-770,7 +367,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "#### Deploy Local Service" + "#### Deploy Local Service\n", + "\n", + "This may take about 7-10 minutes." ] }, { @@ -942,7 +541,8 @@ "\n", " result = {\n", " 'prediction': str(labels[predictions[0].argmax().item()]),\n", - " 'probability': str(predictions[0].max())\n", + " 'probability': str(predictions[0].max()),\n", + " 'message': 'NLP on Azure'\n", " }\n", "\n", " print(result)\n", @@ -1000,6 +600,100 @@ "pp.pprint(local_service.get_logs())" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Using HTTP call\n", + "We will build a Jupyter widget so we can construct a raw HTTP request and send it to the service through the widget." ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import ipywidgets as widgets\n", + "from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider, VBox\n", + "\n", + "from IPython.display import display\n", + "\n", + "\n", + "import requests\n", + "\n", + "text = widgets.Text(\n", + " value='',\n", + " placeholder='Type a query',\n", + " description='Question:',\n", + " disabled=False\n", + ")\n", + "\n", + "button = widgets.Button(description=\"Get Tag!\")\n", + "output = widgets.Output()\n", + "\n", + "items = [text, button] \n", + "\n", + "box_layout = Layout(display='flex',\n", + " flex_flow='row',\n", + " align_items='stretch',\n", + " width='70%')\n", + "\n", + "box_auto = Box(children=items, layout=box_layout)\n", + "\n", + "\n", + "def on_button_clicked(b):\n", + " with output:\n", + " input_data = '{\\\"text\\\": \\\"'+ text.value +'\\\"}'\n", + " headers = {'Content-Type':'application/json'}\n", + " resp = requests.post(local_service.scoring_uri, input_data, headers=headers)\n", + " \n", + " print(\"=\"*10)\n", + " print(\"Question:\", text.value)\n", + " print(\"POST to url\", local_service.scoring_uri)\n", + " print(\"Prediction:\",
resp.text)\n", + " print(\"=\"*10)\n", + "\n", + "button.on_click(on_button_clicked)\n", + "\n", + "# Display the GUI\n", + "VBox([box_auto, output])" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Test Web Service with HTTP call\n", + "\n", + "Send a raw HTTP request directly to the service, without the widget." ] }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "query = 'My VM is not working'\n", + "input_data = '{\\\"text\\\": \\\"'+ query +'\\\"}'\n", + "headers = {'Content-Type':'application/json'}\n", + "resp = requests.post(local_service.scoring_uri, input_data, headers=headers)\n", + "\n", + "print(\"=\"*10)\n", + "print(\"Question:\", query)\n", + "print(\"POST to url\", local_service.scoring_uri)\n", + "print(\"Prediction:\", resp.text)\n", + "print(\"=\"*10)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---" ] }, { "cell_type": "markdown", "metadata": {}, @@ -1181,7 +875,7 @@ "# prov_config = AksCompute.provisioning_configuration(cluster_purpose = AksCompute.ClusterPurpose.DEV_TEST)\n", "prov_config = AksCompute.provisioning_configuration()\n", "\n", - "aks_name = 'myaks'\n", + "aks_name = 'mtcs-amldev-aks'\n", "# Create the cluster\n", "aks_target = ComputeTarget.create(workspace = ws,\n", " name = aks_name,\n", @@ -1207,13 +901,13 @@ "from azureml.core.webservice import AksWebservice, Webservice\n", "from azureml.core.model import Model\n", "\n", - "aks_target = AksCompute(ws,\"myaks\")\n", + "aks_target = AksCompute(ws,\"mtcs-amldev-aks\")\n", "\n", "## Create a deployment configuration file and specify the number of CPUs and gigabyte of RAM needed for your cluster.
\n", "## If you feel you need more later, you would have to recreate the image and redeploy the service.\n", "deployment_config = AksWebservice.deploy_configuration(cpu_cores = 2, memory_gb = 4)\n", "\n", - "aks_service = Model.deploy(ws, \"myservice\", [model], inference_config, deployment_config, aks_target)\n", + "aks_service = Model.deploy(ws, \"azureserviceclassifier-bert\", [model], inference_config, deployment_config, aks_target)\n", "aks_service.wait_for_deployment(show_output = True)\n", "print(aks_service.state)" ] @@ -1241,9 +935,19 @@ }, { "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], + "execution_count": 36, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'prediction': 'azure-virtual-machine', 'probability': '0.98652285', 'message': 'NLP on Azure'}\n", + "CPU times: user 24.4 ms, sys: 1.69 ms, total: 26.1 ms\n", + "Wall time: 8.84 s\n" + ] + } + ], "source": [ "%%time\n", "import json\n", @@ -1271,104 +975,6 @@ "print(aks_service.scoring_uri)" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Using HTTP call" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We will make a Jupyter widget so we can now send construct raw HTTP request and send to the service through the widget." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Test Web Service with HTTP call" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import ipywidgets as widgets\n", - "from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider, VBox\n", - "\n", - "from IPython.display import display\n", - "\n", - "\n", - "import requests\n", - "\n", - "text = widgets.Text(\n", - " value='',\n", - " placeholder='Type a query',\n", - " description='Question:',\n", - " disabled=False\n", - ")\n", - "\n", - "button = widgets.Button(description=\"Get Tag!\")\n", - "output = widgets.Output()\n", - "\n", - "items = [text, button] \n", - "\n", - "box_layout = Layout(display='flex',\n", - " flex_flow='row',\n", - " align_items='stretch',\n", - " width='70%')\n", - "\n", - "box_auto = Box(children=items, layout=box_layout)\n", - "\n", - "\n", - "def on_button_clicked(b):\n", - " with output:\n", - " input_data = '{\\\"text\\\": \\\"'+ text.value +'\\\"}'\n", - " headers = {'Content-Type':'application/json'}\n", - " resp = requests.post(local_service.scoring_uri, input_data, headers=headers)\n", - " \n", - " print(\"=\"*10)\n", - " print(\"Question:\", text.value)\n", - " print(\"POST to url\", local_service.scoring_uri)\n", - " print(\"Prediction:\", resp.text)\n", - " print(\"=\"*10)\n", - "\n", - "button.on_click(on_button_clicked)\n", - "\n", - "#Display the GUI\n", - "VBox([box_auto, output])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Doing a raw HTTP request and send to the service through without a widget." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "query = 'My VM is not working'\n", - "input_data = '{\\\"text\\\": \\\"'+ query +'\\\"}'\n", - "headers = {'Content-Type':'application/json'}\n", - "resp = requests.post(local_service.scoring_uri, input_data, headers=headers)\n", - "\n", - "print(\"=\"*10)\n", - "print(\"Question:\", query)\n", - "print(\"POST to url\", local_service.scoring_uri)\n", - "print(\"Prediction:\", resp.text)\n", - "print(\"=\"*10)" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -1401,9 +1007,18 @@ }, { "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], + "execution_count": 37, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Model: azure-service-classifier, ID: azure-service-classifier:2\n", + "Webservice: azureserviceclassifier-bert, scoring URI: http://52.156.100.189:80/api/v1/service/azureserviceclassifier-bert/score\n" + ] + } + ], "source": [ "models = ws.models\n", "for name, model in models.items():\n", @@ -1435,6 +1050,9 @@ } ], "metadata": { + "kernel_info": { + "name": "python3-azureml" + }, "kernelspec": { "display_name": "Python 3.6 - AzureML", "language": "python", @@ -1451,6 +1069,9 @@ "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" + }, + "nteract": { + "version": "nteract-front-end@1.0.0" } }, "nbformat": 4, diff --git a/2-Inferencing/azureserviceclassifier_inferencing.ipynb.amltmp b/2-Inferencing/azureserviceclassifier_inferencing.ipynb.amltmp deleted file mode 100644 index 97d6f9c..0000000 --- a/2-Inferencing/azureserviceclassifier_inferencing.ipynb.amltmp +++ /dev/null @@ -1,1455 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." 
- ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "# Inferencing with TensorFlow 2.0 on Azure Machine Learning Service" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Overview of Workshop\n", - "\n", - "This notebook is Part 2 (Inferencing and Deploying a Model) of a four part workshop that demonstrates an end-to-end workflow for implementing a BERT model using Tensorflow 2.0 on Azure Machine Learning Service. The different components of the workshop are as follows:\n", - "\n", - "- Part 1: [Preparing Data and Model Training](https://github.com/microsoft/bert-stack-overflow/blob/master/1-Training/AzureServiceClassifier_Training.ipynb)\n", - "- Part 2: [Inferencing and Deploying a Model](https://github.com/microsoft/bert-stack-overflow/blob/master/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb)\n", - "- Part 3: [Setting Up a Pipeline Using MLOps](https://github.com/microsoft/bert-stack-overflow/tree/master/3-ML-Ops)\n", - "- Part 4: [Explaining Your Model Interpretability](https://github.com/microsoft/bert-stack-overflow/blob/master/4-Interpretibility/IBMEmployeeAttritionClassifier_Interpretability.ipynb)\n", - "\n", - "This workshop shows how to convert a TF 2.0 BERT model and deploy the model as Webservice in step-by-step fashion:\n", - "\n", - " * Initilize your workspace\n", - " * Download a previous saved model (saved on Azure Machine Learning)\n", - " * Test the downloaded model\n", - " * Display scoring script\n", - " * Defining an Azure Environment\n", - " * Deploy Model as Webservice (Local, ACI and AKS)\n", - " * Test Deployment (Azure ML Service Call, Raw HTTP Request)\n", - " * Clean up Webservice" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## What is Azure Machine Learning Service?\n", - "Azure Machine Learning service is a cloud service that you can use to develop and deploy machine learning models. 
Using Azure Machine Learning service, you can track your models as you build, train, deploy, and manage them, all at the broad scale that the cloud provides.\n", - "![](./images/aml-overview.png)\n", - "\n", - "\n", - "#### How can we use Azure Machine Learning SDK for deployment and inferencing of a machine learning models?\n", - "Deployment and inferencing of a machine learning model, is often an cumbersome process. Once you a trained model and a scoring script working on your local machine, you will want to deploy this model as a web service.\n", - "\n", - "To facilitate deployment and inferencing, the Azure Machine Learning Python SDK provides a high-level abstraction for model deployment of a web service running on your [local](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#local) machine, in Azure Container Instance ([ACI](https://azure.microsoft.com/en-us/services/container-instances/)) or Azure Kubernetes Service ([AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/)), which allows users to easily deploy their models in the Azure ecosystem." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Prerequisites\n", - "* Understand the [architecture and terms](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning\n", - "* If you are using an Azure Machine Learning Notebook VM, you are all set. 
Otherwise, go through the [configuration notebook](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup) to:\n", - " * Install the AML SDK\n", - " * Create a workspace and its configuration file (config.json)\n", - "* For local scoring test, you will also need to have Tensorflow and Keras installed in the current Jupyter kernel.\n", - "* Please run through Part 1: [Working With Data and Training](1_AzureServiceClassifier_Training.ipynb) Notebook first to register your model\n", - "* Make sure you enable [Docker for non-root users](https://docs.docker.com/install/linux/linux-postinstall/) (This is needed to run Local Deployment). Run the following commands in your Terminal and go to the your [Jupyter dashboard](/tree) and click `Quit` on the top right corner. After the shutdown, the Notebook will be automatically refereshed with the new permissions.\n", - "```bash\n", - " sudo usermod -a -G docker $USER\n", - " newgrp docker\n", - "```" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Enable Docker for non-root users" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "# !sudo usermod -a -G docker $USER\n", - "# !newgrp docker" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Check if you have the correct permissions to run Docker. Running the line below should print:\n", - "```\n", - "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n", - "```" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "# !docker ps" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - ">**Note:** Make you shutdown your Jupyter notebook to enable this access. Go to the your [Jupyter dashboard](/tree) and click `Quit` on the top right corner. After the shutdown, the Notebook will be automatically refereshed with the new permissions." 
- ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Azure Service Classification Problem \n", - "One of the key tasks to ensuring long term success of any Azure service is actively responding to related posts in online forums such as Stackoverflow. In order to keep track of these posts, Microsoft relies on the associated tags to direct questions to the appropriate support team. While Stackoverflow has different tags for each Azure service (azure-web-app-service, azure-virtual-machine-service, etc), people often use the generic **azure** tag. This makes it hard for specific teams to track down issues related to their product and as a result, many questions get left unanswered. \n", - "\n", - "**In order to solve this problem, we will be building a model to classify posts on Stackoverflow with the appropriate Azure service tag.**\n", - "\n", - "We will be using a BERT (Bidirectional Encoder Representations from Transformers) model which was published by researchers at Google AI Language. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of natural language processing (NLP) tasks without substantial architecture modifications.\n", - "\n", - "For more information about the BERT, please read this [paper](https://arxiv.org/pdf/1810.04805.pdf)" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Checking Azure Machine Learning Python SDK Version\n", - "\n", - "If you are running this on a Notebook VM, the Azure Machine Learning Python SDK is installed by default. 
If you are running this locally, you can follow these [instructions](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/install?view=azure-ml-py) to install it using pip.\n", - "\n", - "This tutorial requires version 1.0.69 or higher. We can import the Python SDK to ensure it has been properly installed:" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "# Check core SDK version number\n", - "import azureml.core\n", - "\n", - "print(\"SDK version:\", azureml.core.VERSION)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Connect To Workspace\n", - "\n", - "Initialize a [Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the prerequisites step. Workspace.from_config() creates a workspace object from the details stored in config.json." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import Workspace\n", - "\n", - "ws = Workspace.from_config()\n", - "print('Workspace name: ' + ws.name, \n", - " 'Azure region: ' + ws.location, \n", - " 'Subscription id: ' + ws.subscription_id, \n", - " 'Resource group: ' + ws.resource_group, sep = '\\n')" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Register Datastore\n", - "A [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) is used to store connection information to a central data storage. This allows you to access your storage without having to hard code this (potentially confidential) information into your scripts. \n", - "\n", - "In this tutorial, the model was been previously prepped and uploaded into a central [Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) container. 
We will register this container into our workspace as a datastore using a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview). \n", - "\n", - "\n", - "\n", - "We need to define the following parameters to register a datastore:\n", - "\n", - "- `ws`: The workspace object\n", - "- `datastore_name`: The name of the datastore, case insensitive, can only contain alphanumeric characters and _.\n", - "- `container_name`: The name of the azure blob container.\n", - "- `account_name`: The storage account name.\n", - "- `sas_token`: An account SAS token, defaults to None.\n" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core.datastore import Datastore\n", - "\n", - "datastore_name = 'tfworld'\n", - "container_name = 'azure-service-classifier'\n", - "account_name = 'johndatasets'\n", - "sas_token = '?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2021-06-02T03:40:25Z&st=2020-03-09T19:40:25Z&spr=https&sig=bUwK7AJUj2c%2Fr90Qf8O1sojF0w6wRFgL2c9zMVCWNPA%3D'\n", - "\n", - "datastore = Datastore.register_azure_blob_container(workspace=ws, \n", - " datastore_name=datastore_name, \n", - " container_name=container_name,\n", - " account_name=account_name, \n", - " sas_token=sas_token)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### If the datastore has already been registered, then you (and other users in your workspace) can directly run this cell." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "datastore = ws.datastores['tfworld']" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Download Model from Datastore\n", - "Get the trained model from an Azure Blob container. The model is saved into two files, ``config.json`` and ``model.h5``." 
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core.model import Model\n", - "\n", - "datastore.download('./',prefix=\"model\")" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Registering the Model with the Workspace\n", - "Register the model to use in your workspace. " - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "model = Model.register(model_path = \"./model\",\n", - " model_name = \"azure-service-classifier\", # this is the name the model is registered as\n", - " tags = {'pretrained': \"BERT\"},\n", - " workspace = ws)\n", - "model_dir = './model'" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Downloading and Using Registered Models\n", - "> If you already completed Part 1: [Working With Data and Training](1_AzureServiceClassifier_Training.ipynb) Notebook.You can dowload your registered BERT Model and use that instead of the model saved on the blob storage." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "```python\n", - "model = ws.models['azure-service-classifier']\n", - "model_dir = model.download(target_dir='.', exist_ok=True, exists_ok=None)\n", - "```" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Inferencing on the test set\n", - "Let's check the version of the local Keras. Make sure it matches with the version number printed out in the training script. Otherwise you might not be able to load the model properly." 
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "import keras\n", - "import tensorflow as tf\n", - "\n", - "print(\"Keras version:\", keras.__version__)\n", - "print(\"Tensorflow version:\", tf.__version__)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Install Transformers Library\n", - "We have trained BERT model using Tensorflow 2.0 and the open source [huggingface/transformers](https://github.com/huggingface/transformers) libary. So before we can load the model we need to make sure we have also installed the Transformers Library." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%pip install transformers" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Load the Tensorflow 2.0 BERT model.\n", - "Load the downloaded Tensorflow 2.0 BERT model" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from transformers import BertTokenizer, TFBertPreTrainedModel, TFBertMainLayer\n", - "from transformers.modeling_tf_utils import get_initializer\n", - "class TFBertForMultiClassification(TFBertPreTrainedModel):\n", - " def __init__(self, config, *inputs, **kwargs):\n", - " super(TFBertForMultiClassification, self).__init__(config, *inputs, **kwargs)\n", - " self.num_labels = config.num_labels\n", - " self.bert = TFBertMainLayer(config, name='bert')\n", - " self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)\n", - " self.classifier = tf.keras.layers.Dense(config.num_labels,\n", - " kernel_initializer=get_initializer(config.initializer_range),\n", - " name='classifier',\n", - " activation='softmax')\n", - " def call(self, inputs, **kwargs):\n", - " outputs = self.bert(inputs, **kwargs)\n", - " pooled_output = outputs[1]\n", - " pooled_output = self.dropout(pooled_output, training=kwargs.get('training', False))\n", - " logits = 
self.classifier(pooled_output)\n", - " outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here\n", - " return outputs # logits, (hidden_states), (attentions)\n", - " \n", - "max_seq_length = 128\n", - "labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", - "loaded_model = TFBertForMultiClassification.from_pretrained(model_dir, num_labels=len(labels))\n", - "tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n", - "print(\"Model loaded from disk.\")" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Feed in test sentence to test the BERT model. And time the duration of the prediction." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%%time\n", - "import json \n", - "\n", - "# Input test sentences\n", - "raw_data = json.dumps({\n", - " 'text': 'My VM is not working'\n", - "})\n", - "\n", - "# Encode inputs using tokenizer\n", - "inputs = tokenizer.encode_plus(\n", - " json.loads(raw_data)['text'],\n", - " add_special_tokens=True,\n", - " max_length=max_seq_length\n", - " )\n", - "input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n", - "\n", - "# The mask has 1 for real tokens and 0 for padding tokens. 
Only real tokens are attended to.\n", - "attention_mask = [1] * len(input_ids)\n", - "\n", - "# Zero-pad up to the sequence length.\n", - "padding_length = max_seq_length - len(input_ids)\n", - "input_ids = input_ids + ([0] * padding_length)\n", - "attention_mask = attention_mask + ([0] * padding_length)\n", - "token_type_ids = token_type_ids + ([0] * padding_length)\n", - " \n", - "# Make prediction\n", - "predictions = loaded_model.predict({\n", - " 'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),\n", - " 'attention_mask': tf.convert_to_tensor([attention_mask], dtype=tf.int32),\n", - " 'token_type_ids': tf.convert_to_tensor([token_type_ids], dtype=tf.int32)\n", - " })\n", - "\n", - "result = {\n", - " 'prediction': str(labels[predictions[0].argmax().item()]),\n", - " 'probability': str(predictions[0].max())\n", - " }\n", - "\n", - "print(result)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "As you can see based on the sample sentence the model can predict the probability of the StackOverflow tags related to that sentence." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Inferencing with ONNX" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### ONNX and ONNX Runtime\n", - "**ONNX (Open Neural Network Exchange)** is an interoperable standard format for ML models, with support for both DNN and traditional ML. Models can be converted from a variety of frameworks, such as TensorFlow, Keras, PyTorch, scikit-learn, and more (see [ONNX Conversion tutorials](https://github.com/onnx/tutorials#converting-to-onnx-format)). 
This provides data teams with the flexibility to use their framework of choice for their training needs, while streamlining the process to operationalize these models for production usage in a consistent way.\n", - "\n", - " In this section, we will demonstrate how to use ONNX Runtime, a high performance inference engine for ONNX format models, for inferencing our model. Along with interoperability, ONNX Runtime's performance-focused architecture can also accelerate inferencing for many models through graph optimizations, utilization of custom accelerators, and more. You can find more about performance tuning [here](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_Perf_Tuning.md)." - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Download ONNX Model\n", - "To visualize the model, we can use Netron. Click [here](https://lutzroeder.github.io/netron/) to open the browser version and load the model." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "datastore.download('.',prefix=\"model/bert_tf2.onnx\")" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Install ONNX Runtime" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%pip install onnxruntime" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Loading ONNX Model\n", - "Load the downloaded ONNX BERT model." 
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "import numpy as np\n", - "import onnxruntime as rt\n", - "from transformers import BertTokenizer, TFBertPreTrainedModel, TFBertMainLayer\n", - "max_seq_length = 128\n", - "labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", - "tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n", - "\n", - "sess = rt.InferenceSession(\"./model/bert_tf2.onnx\")\n", - "print(\"ONNX Model loaded from disk.\")" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### View the inputs and outputs of converted ONNX model" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "for i in range(len(sess.get_inputs())):\n", - " input_name = sess.get_inputs()[i].name\n", - " print(\"Input name :\", input_name)\n", - " input_shape = sess.get_inputs()[i].shape\n", - " print(\"Input shape :\", input_shape)\n", - " input_type = sess.get_inputs()[i].type\n", - " print(\"Input type :\", input_type)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "for i in range(len(sess.get_outputs())):\n", - " output_name = sess.get_outputs()[i].name\n", - " print(\"Output name :\", output_name) \n", - " output_shape = sess.get_outputs()[i].shape\n", - " print(\"Output shape :\", output_shape)\n", - " output_type = sess.get_outputs()[i].type\n", - " print(\"Output type :\", output_type)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Inferencing with ONNX Runtime" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%%time\n", - "import json \n", - "\n", - "# Input test sentences\n", - "raw_data = json.dumps({\n", - " 'text': 'My VM is not working'\n", - "})\n", - "\n", - "labels = ['azure-web-app-service', 'azure-storage', 
'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", - "\n", - "# Encode inputs using tokenizer\n", - "inputs = tokenizer.encode_plus(\n", - " json.loads(raw_data)['text'],\n", - " add_special_tokens=True,\n", - " max_length=max_seq_length\n", - " )\n", - "input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n", - "\n", - " # The mask has 1 for real tokens and 0 for padding tokens. Only real tokens are attended to.\n", - "attention_mask = [1] * len(input_ids)\n", - "\n", - " # Zero-pad up to the sequence length.\n", - "padding_length = max_seq_length - len(input_ids)\n", - "input_ids = input_ids + ([0] * padding_length)\n", - "attention_mask = attention_mask + ([0] * padding_length)\n", - "token_type_ids = token_type_ids + ([0] * padding_length)\n", - " \n", - " # Make prediction\n", - "convert_input = {\n", - " sess.get_inputs()[0].name: np.array(tf.convert_to_tensor([token_type_ids], dtype=tf.int32)),\n", - " sess.get_inputs()[1].name: np.array(tf.convert_to_tensor([input_ids], dtype=tf.int32)),\n", - " sess.get_inputs()[2].name: np.array(tf.convert_to_tensor([attention_mask], dtype=tf.int32))\n", - " }\n", - "\n", - "predictions = sess.run([output_name], convert_input)\n", - "\n", - "result = {\n", - " 'prediction': str(labels[predictions[0].argmax().item()]),\n", - " 'probability': str(predictions[0].max())\n", - " }\n", - "\n", - "print(result)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Deploy models on Azure ML\n", - "\n", - "Now we are ready to deploy the model as a web service running on your [local](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#local) machine, in Azure Container Instance [ACI](https://azure.microsoft.com/en-us/services/container-instances/) or Azure Kubernetes Service [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/). 
Azure Machine Learning accomplishes this by constructing a Docker image with the scoring logic and model baked in. \n", - "> **Note:** For this Notebook, we'll use the original model format for deployment, but the ONNX model can be deployed in the same way by using ONNX Runtime in the scoring script.\n", - "\n", - "![](./images/aml-deploy.png)\n", - "\n", - "\n", - "### Deploying a web service\n", - "Once you've tested the model and are satisfied with the results, deploy the model as a web service. For this Notebook, we'll use the original model format for deployment, but note that the ONNX model can be deployed in the same way by using ONNX Runtime in the scoring script.\n", - "\n", - "To build the correct environment, provide the following:\n", - "* A scoring script to show how to use the model\n", - "* An environment file to show what packages need to be installed\n", - "* A configuration file to build the web service\n", - "* The model you trained before\n", - "\n", - "Read more about deployment [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where)" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Create score.py" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "First, we will create a scoring script that will be invoked by the web service call. We have prepared a [score.py script](code/scoring/score.py) in advance that scores your BERT model.\n", - "\n", - "* Note that the scoring script must have two required functions, ``init()`` and ``run(input_data)``.\n", - " * In ``init()`` function, you typically load the model into a global object. This function is executed only once when the Docker container is started.\n", - " * In ``run(input_data)`` function, the model is used to predict a value based on the input data. The input and output to run typically use JSON as serialization and de-serialization format but you are not limited to that." 
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%pycat score.py" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Create Environment" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "You can create and/or use a Conda environment using the [Conda Dependencies object](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies?view=azure-ml-py) when deploying a Webservice." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import Environment\n", - "from azureml.core.conda_dependencies import CondaDependencies \n", - "\n", - "myenv = CondaDependencies.create(conda_packages=['numpy','pandas'],\n", - " pip_packages=['numpy','pandas','inference-schema[numpy-support]','azureml-defaults','tensorflow==2.0.0','transformers==2.0.0','h5py<3.0.0'])\n", - "\n", - "with open(\"myenv.yml\",\"w\") as f:\n", - " f.write(myenv.serialize_to_string())" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Review the content of the `myenv.yml` file." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%pycat myenv.yml" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Create Inference Configuration\n", - "\n", - "We need to define the [Inference Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py) for the web service. 
Source directories are also supported: you can upload an entire folder from your local machine as dependencies for the Webservice.\n", - "Note: in that case, your entry_script and conda_file paths are relative paths to the source_directory path.\n", - "\n", - "Sample code for using a source directory:\n", - "\n", - "```python\n", - "inference_config = InferenceConfig(source_directory=\"C:/abc\",\n", - " runtime= \"python\", \n", - " entry_script=\"x/y/score.py\",\n", - " conda_file=\"env/myenv.yml\")\n", - "```\n", - "\n", - " - source_directory = holds the source path as a string; this entire folder gets added to the image, so it's easy to access any files within this folder or its subfolders\n", - " - runtime = which runtime to use for the image. Currently supported runtimes are 'spark-py' and 'python'\n", - " - entry_script = contains the logic specific to initializing your model and running predictions\n", - " - conda_file = manages conda and python package dependencies.\n", - " \n", - " \n", - " > **Note:** Deployment uses the inference configuration and deployment configuration to deploy the models. The deployment process is similar regardless of the compute target. Deploying to AKS is slightly different because you must provide a reference to the AKS cluster." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core.model import InferenceConfig\n", - "\n", - "inference_config = InferenceConfig(source_directory=\"./\",\n", - " runtime= \"python\", \n", - " entry_script=\"score.py\",\n", - " conda_file=\"myenv.yml\"\n", - " )" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Deploy as a Local Service\n", - "\n", - "Estimated time to complete: **about 3-7 minutes**\n", - "\n", - "Configure the image and deploy it locally. 
The following code goes through these steps:\n", - "\n", - "* Build an image on the local machine (or VM, if you are using a VM) using:\n", - " * The scoring file (`score.py`)\n", - " * The environment file (`myenv.yml`)\n", - " * The model file \n", - "* Define [Local Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.localwebservice?view=azure-ml-py#deploy-configuration-port-none-)\n", - "* Send the image to the local Docker instance. \n", - "* Start up a container using the image.\n", - "* Get the web service HTTP endpoint.\n", - "* This has a very quick turnaround time and is great for testing the service before it is deployed to production\n", - "\n", - "> **Note:** Make sure you enable [Docker for non-root users](https://docs.docker.com/install/linux/linux-postinstall/) (this is needed to run Local Deployment). Run the following commands in your Terminal, then go to your [Jupyter dashboard](/tree) and click `Quit` on the top right corner. 
After the shutdown, the Notebook will be automatically refreshed with the new permissions.\n", - "```bash\n", - " sudo usermod -a -G docker $USER\n", - " newgrp docker\n", - "```" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Deploy Local Service" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core.model import InferenceConfig, Model\n", - "from azureml.core.webservice import LocalWebservice\n", - "\n", - "# Create a local deployment for the web service endpoint\n", - "deployment_config = LocalWebservice.deploy_configuration()\n", - "# Deploy the service\n", - "local_service = Model.deploy(\n", - " ws, \"mymodel\", [model], inference_config, deployment_config)\n", - "# Wait for the deployment to complete\n", - "local_service.wait_for_deployment(True)\n", - "# Display the port that the web service is available on\n", - "print(local_service.port)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "This is the scoring web service endpoint:" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "print(local_service.scoring_uri)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Test Local Service" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Let's test the deployed model. Pick a random sample about an issue, and send it to the web service. Note here we are using the run API in the SDK to invoke the service. You can also make raw HTTP calls using any HTTP tool such as curl.\n", - "\n", - "After the invocation, we print the returned predictions."
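The run API above wraps a plain HTTP POST of a JSON body to the scoring URI. As an illustration, here is a minimal standard-library sketch of that request/response pattern, with a stub in-process server standing in for the deployed web service (the stub handler and its fixed prediction are assumptions for the demo, not the real model):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


# Stub scoring endpoint standing in for local_service.scoring_uri,
# so the request pattern can be exercised without a deployed model.
class StubScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers['Content-Length']))
        text = json.loads(body)['text']
        reply = json.dumps({'prediction': 'azure-virtual-machine',
                            'echo': text}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the demo quiet
        pass


server = HTTPServer(('127.0.0.1', 0), StubScoreHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
scoring_uri = 'http://127.0.0.1:%d/score' % server.server_port

# The same POST you would send to a real scoring URI (e.g. with requests or curl)
req = urllib.request.Request(
    scoring_uri,
    data=json.dumps({'text': 'My VM is not working'}).encode(),
    headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

print(result)
server.shutdown()
```

Against the real service you would post the same JSON body to `local_service.scoring_uri` instead of the stub's address.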
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%%time\n", - "import json\n", - "raw_data = json.dumps({\n", - " 'text': 'My VM is not working'\n", - "})\n", - "\n", - "prediction = local_service.run(input_data=raw_data)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Reloading Webservice\n", - "You can update your score.py file and then call reload() to quickly restart the service. This will only reload your execution script and dependency files, it will not rebuild the underlying Docker image. As a result, reload() is fast." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%%writefile score.py\n", - "import os\n", - "import json\n", - "import tensorflow as tf\n", - "from transformers import TFBertPreTrainedModel, TFBertMainLayer, BertTokenizer\n", - "from transformers.modeling_tf_utils import get_initializer\n", - "import logging\n", - "logging.getLogger(\"transformers.tokenization_utils\").setLevel(logging.ERROR)\n", - "\n", - "\n", - "class TFBertForMultiClassification(TFBertPreTrainedModel):\n", - "\n", - " def __init__(self, config, *inputs, **kwargs):\n", - " super(TFBertForMultiClassification, self) \\\n", - " .__init__(config, *inputs, **kwargs)\n", - " self.num_labels = config.num_labels\n", - "\n", - " self.bert = TFBertMainLayer(config, name='bert')\n", - " self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)\n", - " self.classifier = tf.keras.layers.Dense(\n", - " config.num_labels,\n", - " kernel_initializer=get_initializer(config.initializer_range),\n", - " name='classifier',\n", - " activation='softmax')\n", - "\n", - " def call(self, inputs, **kwargs):\n", - " outputs = self.bert(inputs, **kwargs)\n", - "\n", - " pooled_output = outputs[1]\n", - "\n", - " pooled_output = self.dropout(\n", - " pooled_output,\n", - " training=kwargs.get('training', False))\n", - " logits = self.classifier(pooled_output)\n", - "\n", - 
" # add hidden states and attention if they are here\n", - " outputs = (logits,) + outputs[2:]\n", - "\n", - " return outputs # logits, (hidden_states), (attentions)\n", - "\n", - "\n", - "max_seq_length = 128\n", - "labels = ['azure-web-app-service', 'azure-storage',\n", - " 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", - "\n", - "\n", - "def init():\n", - " global tokenizer, model\n", - " # os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'azure-service-classifier')\n", - " tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n", - " model_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model')\n", - " model = TFBertForMultiClassification \\\n", - " .from_pretrained(model_dir, num_labels=len(labels))\n", - " print(\"hello from the reloaded script\")\n", - "\n", - "def run(raw_data):\n", - "\n", - " # Encode inputs using tokenizer\n", - " inputs = tokenizer.encode_plus(\n", - " json.loads(raw_data)['text'],\n", - " add_special_tokens=True,\n", - " max_length=max_seq_length\n", - " )\n", - " input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\n", - "\n", - " # The mask has 1 for real tokens and 0 for padding tokens.\n", - " # Only real tokens are attended to.\n", - " attention_mask = [1] * len(input_ids)\n", - "\n", - " # Zero-pad up to the sequence length.\n", - " padding_length = max_seq_length - len(input_ids)\n", - " input_ids = input_ids + ([0] * padding_length)\n", - " attention_mask = attention_mask + ([0] * padding_length)\n", - " token_type_ids = token_type_ids + ([0] * padding_length)\n", - "\n", - " # Make prediction\n", - " predictions = model.predict({\n", - " 'input_ids': tf.convert_to_tensor([input_ids], dtype=tf.int32),\n", - " 'attention_mask': tf.convert_to_tensor(\n", - " [attention_mask],\n", - " dtype=tf.int32),\n", - " 'token_type_ids': tf.convert_to_tensor(\n", - " [token_type_ids], \n", - " dtype=tf.int32)\n", - " })\n", - "\n", - " result = {\n", - " 'prediction': 
str(labels[predictions[0].argmax().item()]),\n", - " 'probability': str(predictions[0].max())\n", - " }\n", - "\n", - " print(result)\n", - " return result\n", - "\n", - "\n", - "init()\n", - "run(json.dumps({\n", - " 'text': 'My VM is not working'\n", - "}))\n" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "local_service.reload()" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Updating Webservice\n", - "If you do need to rebuild the image -- to add a new Conda or pip package, for instance -- you will have to call update(), instead (see below).\n", - "\n", - "```python\n", - "local_service.update(models=[loaded_model], \n", - " image_config=None, \n", - " deployment_config=None, \n", - " wait=False, inference_config=None)\n", - "```" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### View service Logs (Debug, when something goes wrong )\n", - ">**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** Run this cell\n", - "\n", - "You should see the phrase **\"hello from the reloaded script\"** in the logs, because we added it to the script when we did a service reload." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "import pprint\n", - "pp = pprint.PrettyPrinter(indent=4)\n", - "pp.pprint(local_service.get_logs())" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Deploy in ACI\n", - "Estimated time to complete: **about 3-7 minutes**\n", - "\n", - "Configure the image and deploy. 
The following code goes through these steps:\n", - "\n", - "* Build an image using:\n", - " * The scoring file (`score.py`)\n", - " * The environment file (`myenv.yml`)\n", - " * The model file\n", - "* Define [ACI Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.aciwebservice?view=azure-ml-py#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none-)\n", - "* Send the image to the ACI container.\n", - "* Start up a container in ACI using the image.\n", - "* Get the web service HTTP endpoint." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%%time\n", - "from azureml.core.webservice import Webservice\n", - "from azureml.exceptions import WebserviceException\n", - "from azureml.core.webservice import AciWebservice, Webservice\n", - "\n", - "## Create a deployment configuration file and specify the number of CPUs and gigabyte of RAM needed for your ACI container. 
\n", - "## If you feel you need more later, you would have to recreate the image and redeploy the service.\n", - "aciconfig = AciWebservice.deploy_configuration(cpu_cores=2, \n", - " memory_gb=4, \n", - " tags={\"model\": \"BERT\", \"method\" : \"tensorflow\"}, \n", - " description='Predict Stack Overflow tags with BERT')\n", - "\n", - "aci_service_name = 'asc-aciservice'\n", - "\n", - "try:\n", - " # Retrieve an existing service by name, if one exists.\n", - " # The ACI service name must be unique within the subscription,\n", - " # so we delete any existing service with this name before redeploying.\n", - " aci_service = Webservice(ws, name=aci_service_name)\n", - " if aci_service:\n", - " aci_service.delete()\n", - "except WebserviceException as e:\n", - " print()\n", - "\n", - "aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n", - "\n", - "aci_service.wait_for_deployment(True)\n", - "print(aci_service.state)" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - } - }, - { - "cell_type": "markdown", - "source": [ - "This is the scoring web service endpoint:" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "print(aci_service.scoring_uri)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Test the deployed model" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Let's test the deployed model. Pick a random sample about an Azure issue, and send it to the web service. Note here we are using the run API in the SDK to invoke the service. You can also make raw HTTP calls using any HTTP tool such as curl.\n", - "\n", - "After the invocation, we print the returned predictions."
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%%time\n", - "import json\n", - "raw_data = json.dumps({\n", - " 'text': 'My VM is not working'\n", - "})\n", - "\n", - "prediction = aci_service.run(input_data=raw_data)\n", - "print(prediction)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### View service Logs (Debug, when something goes wrong )\n", - ">**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** Run this cell" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "import pprint\n", - "pp = pprint.PrettyPrinter(indent=4)\n", - "pp.pprint(aci_service.get_logs())" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Deploy in AKS (Single Node)" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Estimated time to complete: **about 15-25 minutes**, 10-15 mins for AKS provisioning and 5-10 mins to deploy service\n", - "\n", - "Configure the image and deploy. 
The following code goes through these steps:\n", - "\n", - "* Provision a Production AKS Cluster\n", - "* Build an image using:\n", - " * The scoring file (`score.py`)\n", - " * The environment file (`myenv.yml`)\n", - " * The model file\n", - "* Define [AKS Provisioning Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.compute.akscompute?view=azure-ml-py#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none-)\n", - "* Provision an AKS Cluster\n", - "* Define [AKS Deployment Configuration](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice.akswebservice?view=azure-ml-py#deploy-configuration-autoscale-enabled-none--autoscale-min-replicas-none--autoscale-max-replicas-none--autoscale-refresh-seconds-none--autoscale-target-utilization-none--collect-model-data-none--auth-enabled-none--cpu-cores-none--memory-gb-none--enable-app-insights-none--scoring-timeout-ms-none--replica-max-concurrent-requests-none--max-request-wait-time-none--num-replicas-none--primary-key-none--secondary-key-none--tags-none--properties-none--description-none--gpu-cores-none--period-seconds-none--initial-delay-seconds-none--timeout-seconds-none--success-threshold-none--failure-threshold-none--namespace-none--token-auth-enabled-none-)\n", - "* Send the image to the AKS cluster.\n", - "* Start up a container in AKS using the image.\n", - "* Get the web service HTTP endpoint." 
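Both `wait_for_completion` and `wait_for_deployment` in the steps above block by polling the resource's state until it is terminal. A minimal sketch of that polling pattern (the simulated `get_state` sequence is an assumption for illustration, not the Azure ML API):

```python
import itertools
import time

# Simulated service states as provisioning progresses (an assumption for the demo).
states = itertools.chain(['Transitioning', 'Transitioning', 'Healthy'],
                         itertools.repeat('Healthy'))


def get_state():
    return next(states)


def wait_for_deployment(timeout_s=10.0, poll_interval_s=0.01):
    """Poll until the service reports a terminal state, or time out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state()
        if state in ('Healthy', 'Failed'):
            return state
        time.sleep(poll_interval_s)
    raise TimeoutError('service did not reach a terminal state')


print(wait_for_deployment())
# → Healthy
```

The SDK's real wait methods add progress output and richer error reporting, but the loop shape is the same.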
- ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Provisioning Cluster" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core.compute import AksCompute, ComputeTarget\n", - "\n", - "# Use the default configuration (you can also provide parameters to customize this).\n", - "# For example, to create a dev/test cluster, use:\n", - "# prov_config = AksCompute.provisioning_configuration(cluster_purpose = AksCompute.ClusterPurpose.DEV_TEST)\n", - "prov_config = AksCompute.provisioning_configuration()\n", - "\n", - "aks_name = 'myaks'\n", - "# Create the cluster\n", - "aks_target = ComputeTarget.create(workspace = ws,\n", - " name = aks_name,\n", - " provisioning_configuration = prov_config)\n", - "\n", - "# Wait for the create process to complete\n", - "aks_target.wait_for_completion(show_output = True)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Deploying the model" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "from azureml.core.webservice import AksWebservice, Webservice\n", - "from azureml.core.model import Model\n", - "\n", - "aks_target = AksCompute(ws,\"myaks\")\n", - "\n", - "## Create a deployment configuration file and specify the number of CPUs and gigabyte of RAM needed for your cluster. 
\n", - "## If you feel you need more later, you would have to recreate the image and redeploy the service.\n", - "deployment_config = AksWebservice.deploy_configuration(cpu_cores = 2, memory_gb = 4)\n", - "\n", - "aks_service = Model.deploy(ws, \"myservice\", [model], inference_config, deployment_config, aks_target)\n", - "aks_service.wait_for_deployment(show_output = True)\n", - "print(aks_service.state)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### Test the deployed model" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Using the Azure SDK service call" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "We can use the Azure SDK to make a service call with a simple function call." - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "%%time\n", - "import json\n", - "raw_data = json.dumps({\n", - " 'text': 'My VM is not working'\n", - "})\n", - "\n", - "prediction = aks_service.run(input_data=raw_data)\n", - "print(prediction)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "This is the scoring web service endpoint:" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "print(aks_service.scoring_uri)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Using HTTP call" - ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "We will build a Jupyter widget so we can construct a raw HTTP request and send it to the service through the widget."
- ], - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "#### Test Web Service with HTTP call" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "import ipywidgets as widgets\n", - "from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider, VBox\n", - "\n", - "from IPython.display import display\n", - "\n", - "\n", - "import requests\n", - "\n", - "text = widgets.Text(\n", - " value='',\n", - " placeholder='Type a query',\n", - " description='Question:',\n", - " disabled=False\n", - ")\n", - "\n", - "button = widgets.Button(description=\"Get Tag!\")\n", - "output = widgets.Output()\n", - "\n", - "items = [text, button] \n", - "\n", - "box_layout = Layout(display='flex',\n", - " flex_flow='row',\n", - " align_items='stretch',\n", - " width='70%')\n", - "\n", - "box_auto = Box(children=items, layout=box_layout)\n", - "\n", - "\n", - "def on_button_clicked(b):\n", - " with output:\n", - " input_data = '{\\\"text\\\": \\\"'+ text.value +'\\\"}'\n", - " headers = {'Content-Type':'application/json'}\n", - " resp = requests.post(aks_service.scoring_uri, input_data, headers=headers)\n", - " \n", - " print(\"=\"*10)\n", - " print(\"Question:\", text.value)\n", - " print(\"POST to url\", aks_service.scoring_uri)\n", - " print(\"Prediction:\", resp.text)\n", - " print(\"=\"*10)\n", - "\n", - "button.on_click(on_button_clicked)\n", - "\n", - "# Display the GUI\n", - "VBox([box_auto, output])" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "Now let's send a raw HTTP request to the service without a widget."
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "query = 'My VM is not working'\n", - "input_data = '{\\\"text\\\": \\\"'+ query +'\\\"}'\n", - "headers = {'Content-Type':'application/json'}\n", - "resp = requests.post(aks_service.scoring_uri, input_data, headers=headers)\n", - "\n", - "print(\"=\"*10)\n", - "print(\"Question:\", query)\n", - "print(\"POST to url\", aks_service.scoring_uri)\n", - "print(\"Prediction:\", resp.text)\n", - "print(\"=\"*10)" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "### View service Logs (Debug, when something goes wrong)\n", - ">**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:** Run this cell" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "import pprint\n", - "pp = pprint.PrettyPrinter(indent=4)\n", - "pp.pprint(aks_service.get_logs())" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Summary of workspace\n", - "Let's look at the workspace after the web services were deployed. You should see\n", - "\n", - "* the registered model, with its name and ID\n", - "* the AKS and ACI webservices, each with a scoring URI" - ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "models = ws.models\n", - "for name, model in models.items():\n", - " print(\"Model: {}, ID: {}\".format(name, model.id))\n", - " \n", - "webservices = ws.webservices\n", - "for name, webservice in webservices.items():\n", - " print(\"Webservice: {}, scoring URI: {}\".format(name, webservice.scoring_uri))" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - }, - { - "cell_type": "markdown", - "source": [ - "## Delete services to clean up\n", - "You can delete the local, ACI, and AKS deployments with simple delete API calls."
- ], - "metadata": {} - }, - { - "cell_type": "code", - "source": [ - "local_service.delete()\n", - "aci_service.delete()\n", - "aks_service.delete()" - ], - "outputs": [], - "execution_count": null, - "metadata": {} - } - ], - "metadata": { - "kernelspec": { - "name": "python3-azureml", - "language": "python", - "display_name": "Python 3.6 - AzureML" - }, - "language_info": { - "name": "python", - "version": "3.6.9", - "mimetype": "text/x-python", - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "pygments_lexer": "ipython3", - "nbconvert_exporter": "python", - "file_extension": ".py" - }, - "kernel_info": { - "name": "python3-azureml" - }, - "nteract": { - "version": "nteract-front-end@1.0.0" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file diff --git a/2-Inferencing/batch/code/.amlignore b/2-Inferencing/batch/code/.amlignore new file mode 100644 index 0000000..0fa594b --- /dev/null +++ b/2-Inferencing/batch/code/.amlignore @@ -0,0 +1,6 @@ +## This file was auto generated by the Azure Machine Learning Studio. Please do not remove. +## Read more about the .amlignore file here: https://docs.microsoft.com/azure/machine-learning/how-to-save-write-experiment-files#storage-limits-of-experiment-snapshots + +.ipynb_aml_checkpoints/ +*.amltmp +*.amltemp \ No newline at end of file diff --git a/2-Inferencing/batch/code/.amlignore.amltmp b/2-Inferencing/batch/code/.amlignore.amltmp new file mode 100644 index 0000000..0fa594b --- /dev/null +++ b/2-Inferencing/batch/code/.amlignore.amltmp @@ -0,0 +1,6 @@ +## This file was auto generated by the Azure Machine Learning Studio. Please do not remove. 
+## Read more about the .amlignore file here: https://docs.microsoft.com/azure/machine-learning/how-to-save-write-experiment-files#storage-limits-of-experiment-snapshots + +.ipynb_aml_checkpoints/ +*.amltmp +*.amltemp \ No newline at end of file diff --git a/2-Inferencing/myenv.yml b/2-Inferencing/myenv.yml new file mode 100644 index 0000000..6e0d394 --- /dev/null +++ b/2-Inferencing/myenv.yml @@ -0,0 +1,25 @@ +# Conda environment specification. The dependencies defined in this file will +# be automatically provisioned for runs with userManagedDependencies=False. + +# Details about the Conda environment file format: +# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually + +name: project_environment +dependencies: + # The python interpreter version. + # Currently Azure ML only supports 3.5.2 and later. +- python=3.6.2 + +- pip: + - numpy + - pandas + - inference-schema[numpy-support] + - azureml-defaults~=1.27.0 + - tensorflow==2.0.0 + - transformers==2.0.0 + - h5py<3.0.0 +- numpy +- pandas +channels: +- anaconda +- conda-forge diff --git a/2-Inferencing/score.py b/2-Inferencing/score.py index 2d65e3d..fbb7d9c 100644 --- a/2-Inferencing/score.py +++ b/2-Inferencing/score.py @@ -39,10 +39,8 @@ def call(self, inputs, **kwargs): max_seq_length = 128 -labels = [ - 'azure-web-app-service', 'azure-storage', - 'azure-devops', 'azure-virtual-machine', 'azure-functions' -] +labels = ['azure-web-app-service', 'azure-storage', + 'azure-devops', 'azure-virtual-machine', 'azure-functions'] def init(): @@ -52,7 +50,7 @@ def init(): model_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model') model = TFBertForMultiClassification \ .from_pretrained(model_dir, num_labels=len(labels)) - + print("hello from the reloaded script") def run(raw_data): @@ -81,14 +79,21 @@ def run(raw_data): [attention_mask], dtype=tf.int32), 'token_type_ids': tf.convert_to_tensor( - [token_type_ids], + [token_type_ids], dtype=tf.int32) }) result = { 'prediction': 
str(labels[predictions[0].argmax().item()]), - 'probability': str(predictions[0].max()) + 'probability': str(predictions[0].max()), + 'message': 'NLP on Azure' } print(result) return result + + +init() +run(json.dumps({ + 'text': 'My VM is not working' +})) diff --git a/3-ML-Ops/azureserviceclassifier_aml_pipeline.ipynb.amltmp b/3-ML-Ops/azureserviceclassifier_aml_pipeline.ipynb.amltmp deleted file mode 100644 index 1502b36..0000000 --- a/3-ML-Ops/azureserviceclassifier_aml_pipeline.ipynb.amltmp +++ /dev/null @@ -1,343 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "source": [ - "# Part 3: Publish Azure Machine Learning Pipeline to Train BERT Model\n", - "\n", - "## Overview of the part 3\n", - "For this exercise, we assume that you have trained and deployed a machine learning model and that you are now ready to manage the end-to-end lifecycle of your model. [MLOps](https://docs.microsoft.com/azure/machine-learning/service/concept-model-management-and-deployment) can help you to automatically deploy your model as a web application while implementing quality benchmarks, strict version control, model monitoring, and providing an audit trail.\n", - "\n", - "The different components of the workshop are as follows:\n", - "\n", - "- Part 1: [Preparing Data and Model Training](https://github.com/microsoft/bert-stack-overflow/blob/master/1-Training/AzureServiceClassifier_Training.ipynb)\n", - "- Part 2: [Inferencing and Deploying a Model](https://github.com/microsoft/bert-stack-overflow/blob/master/2-Inferencing/AzureServiceClassifier_Inferencing.ipynb)\n", - "- Part 3: [Setting Up a Pipeline Using MLOps](https://github.com/microsoft/bert-stack-overflow/tree/master/3-ML-Ops)\n", - "- Part 4: [Explaining Your Model Interpretability](https://github.com/microsoft/bert-stack-overflow/blob/master/4-Interpretibility/IBMEmployeeAttritionClassifier_Interpretability.ipynb)" - ], - "metadata": {}, - "id": "23bd7455" - }, - { - "cell_type": "markdown", - "source": [ - "## 
Connect to Workspace" - ], - "metadata": {}, - "id": "71a613c1" - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import Workspace\n", - "\n", - "ws = Workspace.from_config()\n", - "print('Workspace name: ' + ws.name, \n", - " 'Azure region: ' + ws.location, \n", - " 'Subscription id: ' + ws.subscription_id, \n", - " 'Resource group: ' + ws.resource_group, sep = '\\n')" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "Workspace name: mtcs-stg-azml\n", - "Azure region: westus2\n", - "Subscription id: 256c7222-4083-4ba7-8714-baa0df54bfe6\n", - "Resource group: mtcs-stg-azml-rg\n" - ] - } - ], - "execution_count": 14, - "metadata": {}, - "id": "ec584ac2" - }, - { - "cell_type": "markdown", - "source": [ - "## Compute Target" - ], - "metadata": {}, - "id": "9da13986" - }, - { - "cell_type": "code", - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "aml_compute_target = \"gpu-nc8-t4\"\n", - "try:\n", - " aml_compute = AmlCompute(ws, aml_compute_target)\n", - " print(\"found existing compute target.\")\n", - "except ComputeTargetException:\n", - " print(\"creating new compute target\")\n", - " \n", - " provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n", - " min_nodes = 0, \n", - " max_nodes = 2) \n", - " aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n", - " aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n", - " \n", - "print(\"Azure Machine Learning Compute attached\")" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "found existing compute target.\n", - "Azure Machine Learning Compute attached\n" - ] - } - ], - "execution_count": 15, - "metadata": {}, - "id": "29eb8dab" - }, - { - "cell_type": "markdown", - "source": [ - "## Pipeline-specific SDK 
imports" - ], - "metadata": {}, - "id": "39ccf75a" - }, - { - "cell_type": "code", - "source": [ - "import os\n", - "import sys\n", - "from azureml.pipeline.core.graph import PipelineParameter\n", - "from azureml.pipeline.steps import PythonScriptStep\n", - "from azureml.pipeline.core import Pipeline\n", - "from azureml.core.runconfig import RunConfiguration, CondaDependencies\n", - "from azureml.core import Dataset, Datastore\n", - "from azureml.train.dnn import TensorFlow" - ], - "outputs": [], - "execution_count": 16, - "metadata": {}, - "id": "1bb16bbe" - }, - { - "cell_type": "markdown", - "source": [ - "## Define Parameters for Pipeline" - ], - "metadata": {}, - "id": "d2d7e922" - }, - { - "cell_type": "code", - "source": [ - "model_name = PipelineParameter(name=\"model_name\", default_value='azure-service-classifier')\n", - "\n", - "max_seq_length = PipelineParameter(name=\"max_seq_length\", default_value=128)\n", - "\n", - "learning_rate = PipelineParameter(name=\"learning_rate\", default_value=3e-5)\n", - "\n", - "num_epochs = PipelineParameter(name=\"num_epochs\", default_value=3)\n", - "\n", - "export_dir = PipelineParameter(name=\"export_dir\", default_value=\"./outputs/exports\")\n", - "\n", - "batch_size = PipelineParameter(name=\"batch_size\", default_value=32)\n", - "\n", - "steps_per_epoch = PipelineParameter(name=\"steps_per_epoch\", default_value=5)\n", - "\n", - "build_id = PipelineParameter(name='build_id', default_value=0)" - ], - "outputs": [], - "execution_count": 24, - "metadata": {}, - "id": "6209d66c" - }, - { - "cell_type": "code", - "source": [ - "from azureml.core import Dataset\n", - "\n", - "# Get a dataset by name\n", - "train_ds = Dataset.get_by_name(workspace=ws, name='Azure Services Dataset')" - ], - "outputs": [], - "execution_count": 25, - "metadata": {}, - "id": "3d1509e5" - }, - { - "cell_type": "markdown", - "source": [ - "## Creating Steps in a Pipeline" - ], - "metadata": {}, - "id": "56566ac6" - }, - { - "cell_type": 
"code", - "source": [ - "from azureml.core.runconfig import RunConfiguration\n", - "from azureml.core.conda_dependencies import CondaDependencies\n", - "from azureml.core import Environment \n", - "\n", - "aml_run_config = RunConfiguration()\n", - "\n", - "# env = Environment.get(ws, name='AzureML-TensorFlow-2.0-GPU')\n", - "# env.python.conda_dependencies.add_pip_package(\"transformers==2.0.0\")\n", - "# env.python.conda_dependencies.add_pip_package(\"absl-py\")\n", - "# env.python.conda_dependencies.add_pip_package(\"azureml-dataprep\")\n", - "# env.python.conda_dependencies.add_pip_package(\"h5py<3.0.0\")\n", - "# env.python.conda_dependencies.add_pip_package(\"pandas\")\n", - "\n", - "# env.name = \"Bert_training\"\n", - "\n", - "# aml_run_config.environment.python.conda_dependencies = env.python.conda_dependencies\n", - "# aml_run_config.environment.docker.enabled = True\n", - "\n", - "aml_run_config = RunConfiguration(conda_dependencies=CondaDependencies.create(\n", - " conda_packages=['numpy', 'pandas',\n", - " 'scikit-learn', 'keras'],\n", - " pip_packages=['azureml-core==1.25.0', \n", - " 'azureml-defaults==1.25.0',\n", - " 'azureml-telemetry==1.25.0',\n", - " 'azureml-train-restclients-hyperdrive==1.25.0',\n", - " 'azureml-train-core==1.25.0',\n", - " 'azureml-dataprep',\n", - " 'tensorflow-gpu==2.0.0',\n", - " 'transformers==2.0.0',\n", - " \"absl-py\",\n", - " \"azureml-dataprep\",\n", - " 'h5py<3.0.0'])\n", - ")\n", - "\n", - "# aml_run_config .DockerConfiguration.use_docker = True\n", - "\n", - "aml_run_config" - ], - "outputs": [ - { - "output_type": "execute_result", - "execution_count": 26, - "data": { - "text/plain": "{\n \"script\": null,\n \"arguments\": [],\n \"target\": \"local\",\n \"framework\": \"Python\",\n \"communicator\": \"None\",\n \"maxRunDurationSeconds\": null,\n \"nodeCount\": 1,\n \"priority\": null,\n \"environment\": {\n \"name\": null,\n \"version\": null,\n \"environmentVariables\": {\n \"EXAMPLE_ENV_VAR\": 
\"EXAMPLE_VALUE\"\n },\n \"python\": {\n \"userManagedDependencies\": false,\n \"interpreterPath\": \"python\",\n \"condaDependenciesFile\": null,\n \"baseCondaEnvironment\": null,\n \"condaDependencies\": {\n \"name\": \"project_environment\",\n \"dependencies\": [\n \"python=3.6.2\",\n {\n \"pip\": [\n \"azureml-core~=1.27.0\",\n \"azureml-defaults~=1.27.0\",\n \"azureml-telemetry~=1.27.0\",\n \"azureml-train-restclients-hyperdrive~=1.27.0\",\n \"azureml-train-core~=1.27.0\",\n \"tensorflow-gpu==2.0.0\",\n \"transformers==2.0.0\",\n \"absl-py\",\n \"azureml-dataprep\",\n \"h5py<3.0.0\"\n ]\n },\n \"numpy\",\n \"pandas\",\n \"scikit-learn\",\n \"keras\"\n ],\n \"channels\": [\n \"anaconda\",\n \"conda-forge\"\n ]\n }\n },\n \"docker\": {\n \"enabled\": false,\n \"baseImage\": \"mcr.microsoft.com/azureml/intelmpi2018.3-ubuntu16.04:20210301.v1\",\n \"baseDockerfile\": null,\n \"sharedVolumes\": true,\n \"shmSize\": \"2g\",\n \"arguments\": [],\n \"baseImageRegistry\": {\n \"address\": null,\n \"username\": null,\n \"password\": null,\n \"registryIdentity\": null\n },\n \"platform\": {\n \"os\": \"Linux\",\n \"architecture\": \"amd64\"\n }\n },\n \"spark\": {\n \"repositories\": [],\n \"packages\": [],\n \"precachePackages\": true\n },\n \"databricks\": {\n \"mavenLibraries\": [],\n \"pypiLibraries\": [],\n \"rcranLibraries\": [],\n \"jarLibraries\": [],\n \"eggLibraries\": []\n },\n \"r\": null,\n \"inferencingStackVersion\": null\n },\n \"history\": {\n \"outputCollection\": true,\n \"snapshotProject\": true,\n \"directoriesToWatch\": [\n \"logs\"\n ]\n },\n \"spark\": {\n \"configuration\": {\n \"spark.app.name\": \"Azure ML Experiment\",\n \"spark.yarn.maxAppAttempts\": 1\n }\n },\n \"docker\": {\n \"useDocker\": false,\n \"sharedVolumes\": true,\n \"arguments\": [],\n \"shmSize\": \"2g\"\n },\n \"hdi\": {\n \"yarnDeployMode\": \"cluster\"\n },\n \"tensorflow\": {\n \"workerCount\": 1,\n \"parameterServerCount\": 1\n },\n \"mpi\": {\n \"processCountPerNode\": 
1,\n \"nodeCount\": 1\n },\n \"pytorch\": {\n \"communicationBackend\": \"nccl\",\n \"processCount\": null,\n \"nodeCount\": 1\n },\n \"paralleltask\": {\n \"maxRetriesPerWorker\": 0,\n \"workerCountPerNode\": 1,\n \"terminalExitCodes\": null\n },\n \"dataReferences\": {},\n \"data\": {},\n \"outputData\": {},\n \"sourceDirectoryDataStore\": null,\n \"amlcompute\": {\n \"vmSize\": null,\n \"vmPriority\": null,\n \"retainCluster\": false,\n \"name\": null,\n \"clusterMaxNodeCount\": null\n },\n \"credentialPassthrough\": false,\n \"command\": \"\"\n}" - }, - "metadata": {} - } - ], - "execution_count": 26, - "metadata": {}, - "id": "e423a374" - }, - { - "cell_type": "code", - "source": [ - "source_directory = './scripts'\n", - "\n", - "trainStep = PythonScriptStep(name = 'Train_step',\n", - " script_name = './training/train.py',\n", - " arguments=['--data_dir', train_ds.as_named_input('azureservicedata').as_mount(),\n", - " '--max_seq_length', max_seq_length,\n", - " '--batch_size', batch_size,\n", - " '--learning_rate', learning_rate,\n", - " '--steps_per_epoch', steps_per_epoch,\n", - " '--num_epochs', num_epochs,\n", - " '--export_dir','./outputs/model'],\n", - " compute_target = aml_compute,\n", - " source_directory = source_directory,\n", - " runconfig = aml_run_config,\n", - " allow_reuse=False)\n" - ], - "outputs": [], - "execution_count": 27, - "metadata": {}, - "id": "947a9aaa" - }, - { - "cell_type": "code", - "source": [ - "evalStep = PythonScriptStep(name = 'Eval_step',\n", - " script_name = './evaluate/evaluate_model.py',\n", - " arguments=['--build_id', build_id,\n", - " '--model_name', model_name],\n", - " compute_target = aml_compute,\n", - " source_directory = source_directory,\n", - " runconfig = aml_run_config,\n", - " allow_reuse=False)" - ], - "outputs": [], - "execution_count": 28, - "metadata": {}, - "id": "2f16d44c" - }, - { - "cell_type": "code", - "source": [ - "evalStep.run_after(trainStep)\n", - "steps = [evalStep]" - ], - "outputs": [], 
- "execution_count": 29, - "metadata": {}, - "id": "e13c800f" - }, - { - "cell_type": "code", - "source": [ - "train_pipeline = Pipeline(workspace=ws, steps=steps)\n", - "train_pipeline.validate()\n", - "published_pipeline = train_pipeline.publish(name='AzureServiceClassifier_BERT_Training')" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "Step Eval_step is ready to be created [2f0945aa]\n", - "Step Train_step is ready to be created [73c7cdf6]\n", - "Created step Eval_step [2f0945aa][5fd30863-2cc1-4201-986b-ea5532acb0be], (This step will run and generate new outputs)\n", - "Created step Train_step [73c7cdf6][cef53b35-ab88-4cb3-9079-38422a0c98c4], (This step will run and generate new outputs)\n" - ] - } - ], - "execution_count": 30, - "metadata": {}, - "id": "df254bf7" - }, - { - "cell_type": "code", - "source": [], - "outputs": [], - "execution_count": null, - "metadata": {}, - "id": "29ca3956" - } - ], - "metadata": { - "kernelspec": { - "name": "python38-azureml", - "language": "python", - "display_name": "Python 3.8 - AzureML" - }, - "language_info": { - "name": "python", - "version": "3.8.1", - "mimetype": "text/x-python", - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "pygments_lexer": "ipython3", - "nbconvert_exporter": "python", - "file_extension": ".py" - }, - "kernel_info": { - "name": "python38-azureml" - }, - "nteract": { - "version": "nteract-front-end@1.0.0" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} \ No newline at end of file diff --git a/4-Interpretibility/AzureServiceClassifier_Explain.ipynb b/4-Interpretibility/AzureServiceClassifier_Explain.ipynb index 59a0ec0..ad39099 100644 --- a/4-Interpretibility/AzureServiceClassifier_Explain.ipynb +++ b/4-Interpretibility/AzureServiceClassifier_Explain.ipynb @@ -9,7 +9,31 @@ }, "outputs": [], "source": [ - "%pip install --upgrade shap" + "# %pip install --upgrade shap" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": 
"c37b771d", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "%pip install nltk" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2480498a", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "%pip install lime" ] }, { @@ -19,15 +43,26 @@ "metadata": {}, "outputs": [], "source": [ - "%pip install --upgrade transformers" + "# %pip install --upgrade transformers" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "id": "bda5eb63", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Workspace name: mtcs-aml-workshop\n", + "Azure region: westus2\n", + "Subscription id: 256c7222-4083-4ba7-8714-baa0df54bfe6\n", + "Resource group: mtcs-aml-workshop-rg\n" + ] + } + ], "source": [ "from azureml.core import Workspace\n", "\n", @@ -40,18 +75,370 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 12, "id": "7da8627d", "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
\n", + "\n", + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
desctag
0How to queue build pipeline as task from relea...azure-devops
1Unable to download the change history and the ...azure-devops
2Is FileUpload functionality for Azure IoT Java...azure-storage
3Error Calling InitializeCache on WindowsAzure ...azure-storage
4AZURE_FUNCTIONS_ENVIRONMENT vs ASPNETCORE_ENVI...azure-functions
5Azure Blob Shared Access Signature without the...azure-storage
6Azure Webapp wheels --find-links does not work...azure-web-app-service
7Access denied on wwwroot after DevOps deployme...azure-devops
8Running an Azure DevOps pipline via CLI or web...azure-devops
9How to dynamically define 'path' in @BlobOutpu...azure-functions
\n", + "
" + ], + "text/plain": [ + " desc tag\n", + "0 How to queue build pipeline as task from relea... azure-devops\n", + "1 Unable to download the change history and the ... azure-devops\n", + "2 Is FileUpload functionality for Azure IoT Java... azure-storage\n", + "3 Error Calling InitializeCache on WindowsAzure ... azure-storage\n", + "4 AZURE_FUNCTIONS_ENVIRONMENT vs ASPNETCORE_ENVI... azure-functions\n", + "5 Azure Blob Shared Access Signature without the... azure-storage\n", + "6 Azure Webapp wheels --find-links does not work... azure-web-app-service\n", + "7 Access denied on wwwroot after DevOps deployme... azure-devops\n", + "8 Running an Azure DevOps pipline via CLI or web... azure-devops\n", + "9 How to dynamically define 'path' in @BlobOutpu... azure-functions" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# from azureml.core import Dataset\n", + "\n", + "# # Get a dataset by name\n", + "# # train_ds = Dataset.get_by_name(workspace=ws, name='Stackoverflow dataset')\n", + "# # data = train_ds.to_pandas_dataframe()\n", + "# # data.columns = ['idx', 'description', 'classification']\n", + "# # data.head(3)\n", + "\n", + "# azure_dataset = Dataset.get_by_name(ws, 'Azure Services Dataset')\n", + "# azure_dataset.\n", + "\n", + "import pandas as pd\n", + "\n", + "df = pd.read_csv('./data/train.csv', header=None)\n", + "df.columns = ['idx', 'desc','tag']\n", + "df = df.drop(['idx'], axis=1)\n", + "df.head(10)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "410978fd", + "metadata": {}, "outputs": [], "source": [ - "from azureml.core import Dataset\n", + "from azureml.core import Datastore\n", + "\n", + "datastore = Datastore.get(ws, 'mtcseattle')\n", + "datastore.download('./',prefix=\"model\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "c5eccd22", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Some 
layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForMultiClassification: ['mlm___cls', 'nsp___cls']\n", + "- This IS expected if you are initializing TFBertForMultiClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", + "- This IS NOT expected if you are initializing TFBertForMultiClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n", + "Some layers of TFBertForMultiClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier', 'dropout_37']\n", + "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n" + ] + } + ], + "source": [ + "from transformers import BertTokenizer, TFBertPreTrainedModel, TFBertMainLayer, BertForSequenceClassification\n", + "from transformers.modeling_tf_utils import get_initializer\n", + "import tensorflow as tf\n", + "import numpy as np\n", + "import pandas as pd\n", + "from nltk.tokenize import TweetTokenizer\n", + "import random\n", + "import logging\n", + "logging.basicConfig(format='%(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S')\n", + "logging.getLogger().setLevel(logging.INFO)\n", + "\n", + "class TFBertForMultiClassification(TFBertPreTrainedModel):\n", + "\n", + " def __init__(self, config, *inputs, **kwargs):\n", + " super(TFBertForMultiClassification, self) \\\n", + " .__init__(config, *inputs, **kwargs)\n", + " self.num_labels = config.num_labels\n", + "\n", + " self.bert = TFBertMainLayer(config, name='bert')\n", + " self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)\n", + " self.classifier = tf.keras.layers.Dense(\n", + " config.num_labels,\n", + " 
kernel_initializer=get_initializer(config.initializer_range),\n", + " name='classifier',\n", + " activation='softmax')\n", + "\n", + " def call(self, inputs, **kwargs):\n", + " outputs = self.bert(inputs, **kwargs)\n", + "\n", + " pooled_output = outputs[1]\n", + "\n", + " pooled_output = self.dropout(\n", + " pooled_output,\n", + " training=kwargs.get('training', False))\n", + " logits = self.classifier(pooled_output)\n", + "\n", + " # add hidden states and attention if they are here\n", + " outputs = (logits,) + outputs[2:]\n", + "\n", + " return outputs # logits, (hidden_states), (attentions)\n", + "\n", + " \n", + "# Load pre-trained model tokenizer\n", + "pretrained_model = \"./model/model/tf_model.h5\"\n", + "# model = SCForShap.from_pretrained(pretrained_model)\n", + "labels = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", + "\n", + "# Load model and tokenizer\n", + "tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n", + "model = TFBertForMultiClassification.from_pretrained('bert-base-cased', num_labels=len(labels))" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "731bad36", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "0\n", + "desc\n", + "['desc']\n", + "1\n", + "tag\n", + "['tag']\n" + ] + } + ], + "source": [ + "for i, example in enumerate(to_use):\n", + " print(i)\n", + " print(example)\n", + " temp = predictor.split_string(example)\n", + " print(temp)" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "78658cb9", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "26-May-21 09:01:53 - Example 1/2 start\n" + ] + }, + { + "ename": "AttributeError", + "evalue": "'TFBertForMultiClassification' object has no attribute 'to'", + "output_type": "error", + "traceback": [ + 
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)", + "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 10\u001b[0m \u001b[0mlogging\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minfo\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf\"Example {i+1}/{len(to_use)} start\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 11\u001b[0m \u001b[0mtemp\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mpredictor\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msplit_string\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mexample\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 12\u001b[0;31m \u001b[0mexp\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mexplainer\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mexplain_instance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtext_instance\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mexample\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mclassifier_fn\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mpredictor\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpredict\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnum_features\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtemp\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 13\u001b[0m \u001b[0mlogging\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minfo\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf\"Example {i + 1}/{len(to_use)} done\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 14\u001b[0m \u001b[0mwords\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mexp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mas_list\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + 
"\u001b[0;32m/anaconda/envs/azureml_py38/lib/python3.8/site-packages/lime/lime_text.py\u001b[0m in \u001b[0;36mexplain_instance\u001b[0;34m(self, text_instance, classifier_fn, labels, top_labels, num_features, num_samples, distance_metric, model_regressor)\u001b[0m\n\u001b[1;32m 411\u001b[0m mask_string=self.mask_string))\n\u001b[1;32m 412\u001b[0m \u001b[0mdomain_mapper\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mTextDomainMapper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mindexed_string\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 413\u001b[0;31m data, yss, distances = self.__data_labels_distances(\n\u001b[0m\u001b[1;32m 414\u001b[0m \u001b[0mindexed_string\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mclassifier_fn\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnum_samples\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 415\u001b[0m distance_metric=distance_metric)\n", + "\u001b[0;32m/anaconda/envs/azureml_py38/lib/python3.8/site-packages/lime/lime_text.py\u001b[0m in \u001b[0;36m__data_labels_distances\u001b[0;34m(self, indexed_string, classifier_fn, num_samples, distance_metric)\u001b[0m\n\u001b[1;32m 480\u001b[0m \u001b[0mdata\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mi\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minactive\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 481\u001b[0m \u001b[0minverse_data\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mappend\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mindexed_string\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minverse_removing\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minactive\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 482\u001b[0;31m \u001b[0mlabels\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mclassifier_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minverse_data\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 483\u001b[0m \u001b[0mdistances\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdistance_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msparse\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcsr_matrix\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdata\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 484\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mdata\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlabels\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdistances\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;32m/mnt/batch/tasks/shared/LS_root/mounts/clusters/dev-cpu02/code/Users/hyssh/bert-stack-overflow/4-Interpretibility/explainers/LIME_for_text.py\u001b[0m in \u001b[0;36mpredict\u001b[0;34m(self, data)\u001b[0m\n\u001b[1;32m 16\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 17\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mpredict\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdata\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 18\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mto\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdevice\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 19\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 20\u001b[0m \u001b[0mref\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtweet_tokenizer\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtokenize\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdata\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;31mAttributeError\u001b[0m: 'TFBertForMultiClassification' object has no attribute 'to'" + ] + } + ], + "source": [ + "from lime.lime_text import LimeTextExplainer\n", + "from explainers.LIME_for_text import LIMExplainer\n", + "\n", + "predictor = LIMExplainer(model, tokenizer)\n", + "label_names = ['azure-web-app-service', 'azure-storage', 'azure-devops', 'azure-virtual-machine', 'azure-functions']\n", + "explainer = LimeTextExplainer(class_names=label_names, split_expression=predictor.split_string)\n", + "\n", + "to_use = df[-2:]\n", + "for i, example in enumerate(to_use):\n", + " logging.info(f\"Example {i+1}/{len(to_use)} start\")\n", + " temp = predictor.split_string(example)\n", + " exp = explainer.explain_instance(text_instance=example, classifier_fn=predictor.predict, num_features=len(temp))\n", + " logging.info(f\"Example {i + 1}/{len(to_use)} done\")\n", + " words = exp.as_list()\n", + " #sum_ = 0.6\n", + " #exp.local_exp = {x: [(xx, yy / (sum(hh for _, hh in exp.local_exp[x])/sum_)) for xx, yy in exp.local_exp[x]] for x in exp.local_exp}\n", + " exp.show_in_notebook(text=True, labels=(exp.available_labels()[0],))" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "f6ea1520", + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "ename": "TypeError", + "evalue": "expected string or buffer", + "output_type": "error", + "traceback": [ + "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", + "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 11\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 12\u001b[0m \u001b[0mpredictor\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mSHAPexplainer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtokenizer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mwords_dict\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mwords_dict_reverse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 13\u001b[0;31m \u001b[0mtrain_dt\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mpredictor\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msplit_string\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mx\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdf\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 14\u001b[0m \u001b[0midx_train_data\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmax_seq_len\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mpredictor\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdt_to_idx\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtrain_dt\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m(.0)\u001b[0m\n\u001b[1;32m 
11\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 12\u001b[0m \u001b[0mpredictor\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mSHAPexplainer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtokenizer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mwords_dict\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mwords_dict_reverse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 13\u001b[0;31m \u001b[0mtrain_dt\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mpredictor\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msplit_string\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mx\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdf\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 14\u001b[0m \u001b[0midx_train_data\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmax_seq_len\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mpredictor\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdt_to_idx\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtrain_dt\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;32m/mnt/batch/tasks/shared/LS_root/mounts/clusters/dev-cpu02/code/Users/hyssh/bert-stack-overflow/4-Interpretibility/explainers/SHAP_for_text.py\u001b[0m in \u001b[0;36msplit_string\u001b[0;34m(self, string)\u001b[0m\n\u001b[1;32m 58\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 59\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0msplit_string\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m 
\u001b[0mstring\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 60\u001b[0;31m \u001b[0mdata_raw\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtweet_tokenizer\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtokenize\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstring\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 61\u001b[0m \u001b[0mdata_raw\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mx\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mx\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mdata_raw\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mx\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32min\u001b[0m \u001b[0;34m\".,:;'\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 62\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mdata_raw\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;32m/anaconda/envs/azureml_py38/lib/python3.8/site-packages/nltk/tokenize/casual.py\u001b[0m in \u001b[0;36mtokenize\u001b[0;34m(self, text)\u001b[0m\n\u001b[1;32m 284\u001b[0m \"\"\"\n\u001b[1;32m 285\u001b[0m \u001b[0;31m# Fix HTML character entities:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 286\u001b[0;31m \u001b[0mtext\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_replace_html_entities\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtext\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 287\u001b[0m \u001b[0;31m# Remove username handles\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 288\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstrip_handles\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", + 
"\u001b[0;32m/anaconda/envs/azureml_py38/lib/python3.8/site-packages/nltk/tokenize/casual.py\u001b[0m in \u001b[0;36m_replace_html_entities\u001b[0;34m(text, keep, remove_illegal, encoding)\u001b[0m\n\u001b[1;32m 247\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0;34m\"\"\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mremove_illegal\u001b[0m \u001b[0;32melse\u001b[0m \u001b[0mmatch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgroup\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 248\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 249\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mENT_RE\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msub\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0m_convert_entity\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0m_str_to_unicode\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtext\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mencoding\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 250\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 251\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", + "\u001b[0;31mTypeError\u001b[0m: expected string or buffer" + ] + } + ], + "source": [ + "import shap\n", + "from explainers.SHAP_for_text import SHAPexplainer\n", + "logging.getLogger(\"shap\").setLevel(logging.WARNING)\n", + "shap.initjs()\n", + "\n", + "words_dict = {0: None}\n", + "words_dict_reverse = {None: 0}\n", + "for h, hh in enumerate(bag_of_words):\n", + " words_dict[h + 1] = hh\n", + " words_dict_reverse[hh] = h + 1\n", + "\n", + "predictor = SHAPexplainer(model, tokenizer, words_dict, words_dict_reverse)\n", + "train_dt = np.array([predictor.split_string(x) for x in np.array(train_data)])\n", + "idx_train_data, max_seq_len = predictor.dt_to_idx(train_dt)\n", + "\n", + "explainer = shap.KernelExplainer(model=predictor.predict, data=shap.kmeans(idx_train_data, k=50))\n", + "\n", + "texts_ = 
[predictor.split_string(x) for x in texts]\n", + "idx_texts, _ = predictor.dt_to_idx(texts_, max_seq_len=max_seq_len)\n", + "\n", + "to_use = idx_texts[-1:]\n", + "shap_values = explainer.shap_values(X=to_use, nsamples=64, l1_reg=\"aic\")\n", "\n", - "# Get a dataset by name\n", - "train_ds = Dataset.get_by_name(workspace=ws, name='Stackoverflow dataset')\n", - "data = train_ds.to_pandas_dataframe()\n", - "data.columns = ['idx', 'description', 'classification']\n", - "data.head(3)" + "len_ = len(texts_[-1:][0])\n", + "d = {i: sum(x > 0 for x in shap_values[i][0, :len_]) for i, x in enumerate(shap_values)}\n", + "m = max(d, key=d.get)\n", + "print(\" \".join(texts_[-1:][0]))\n", + "shap.force_plot(explainer.expected_value[m], shap_values[m][0, :len_], texts_[-1:][0])" ] }, { diff --git a/4-Interpretibility/data/.amlignore b/4-Interpretibility/data/.amlignore new file mode 100644 index 0000000..0fa594b --- /dev/null +++ b/4-Interpretibility/data/.amlignore @@ -0,0 +1,6 @@ +## This file was auto generated by the Azure Machine Learning Studio. Please do not remove. +## Read more about the .amlignore file here: https://docs.microsoft.com/azure/machine-learning/how-to-save-write-experiment-files#storage-limits-of-experiment-snapshots + +.ipynb_aml_checkpoints/ +*.amltmp +*.amltemp \ No newline at end of file diff --git a/4-Interpretibility/data/.amlignore.amltmp b/4-Interpretibility/data/.amlignore.amltmp new file mode 100644 index 0000000..0fa594b --- /dev/null +++ b/4-Interpretibility/data/.amlignore.amltmp @@ -0,0 +1,6 @@ +## This file was auto generated by the Azure Machine Learning Studio. Please do not remove. 
+## Read more about the .amlignore file here: https://docs.microsoft.com/azure/machine-learning/how-to-save-write-experiment-files#storage-limits-of-experiment-snapshots + +.ipynb_aml_checkpoints/ +*.amltmp +*.amltemp \ No newline at end of file diff --git a/4-Interpretibility/data/data.csv b/4-Interpretibility/data/emp_data.csv similarity index 100% rename from 4-Interpretibility/data/data.csv rename to 4-Interpretibility/data/emp_data.csv diff --git a/4-Interpretibility/data/train.csv b/4-Interpretibility/data/train.csv new file mode 100644 index 0000000..d4d44ee --- /dev/null +++ b/4-Interpretibility/data/train.csv @@ -0,0 +1,482 @@ +52821204,"How to queue build pipeline as task from release pipeline?

There is a build pipeline that someone else owns in the project (it runs one shell script task doesn't publish anything). I own a release pipeline and want to run a job that effectively \queues\"" their build pipeline. I cannot add an extension to do this. Regardless of how we got to this point or best practices is there a way to accomplish triggering a build of their build pipeline from a job in the release pipeline in azure devops? Thank you.

""",azure-devops +52818615,"Unable to download the change history and the discussion details of \Scrum tasks\"" in VSTS using odata

I don't exactly know how to fetch the change history and discussion details in VSTS. I have looked into Workitems and Work Item Revisions but didn't get any data related to history or discussion from it.

PFB the format of odata url used -

https://analytics.dev.azure.com/{OrganizationName}/{ProjectName}/_odata/{version}//WorkItemRevisions?   $filter=WorkItemId eq {Id}   &$select=WorkItemId  Title  State  https://analytics.dev.azure.com/{OrganizationName}/{ProjectName}/_odata/{version}//WorkItems?   $filter=WorkItemId eq {Id}   &$select=WorkItemId  Title  State 
""",azure-devops +55840501,Is FileUpload functionality for Azure IoT Java SDK possible on Android?

We've been trying to use the Azure IoT SDK for Java on Android (via Kotlin) to initiate blob file uploads. The process seems to hang after the SAS token is received and the call to the CloudBlockBlob constructor is made.

So I tried calling the constructor directly and discovered a dependency on javax.xml.stream.XMLOutputFactory by virtue of the dependency on the Azure Storage SDK v. 2.2 (suprisingly old!). The javax libraries AFAIK aren't easily incorporated on Android.

There is a separate Android storage SDK (which presumably doesn't have these dependencies) but including that in addition to the IoT SDK understandably results in a ton of Duplicate Class errors.

What's the way out of this? Fork the Azure IoT SDK for Java and replace the storage SDK reference with the Android version?

,azure-storage +29638084,"Error Calling InitializeCache on WindowsAzure Storage Account

I have the following snippet being called on on application start:

var driveCache = RoleEnvironment.GetLocalResource(\imageslive\""); CloudDrive.InitializeCache(driveCache.RootPath  driveCache.MaximumSizeInMegabytes); 

This has been working for year or so. I have just upload a new version of the site and am now getting the following error:

Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown. at ThrowIfFailed(UInt32 hr) at Microsoft.WindowsAzure.StorageClient.CloudDrive.InitializeCache(String cachePath  Int32 totalCacheSize)  Unknown Error HRESULT=80070103 at Microsoft.WindowsAzure.StorageClient.CloudDrive.InitializeCache(String cachePath  Int32 totalCacheSize) at Site.Global.Application_Start(Object sender  EventArgs e) 

This works when running from within VS with the emulator so presumably is something about the update.

Does anyone have any pointers about how I might go about getting more information? I cannot see any way of getting more information let alone what the sudden cause of the error is.

""",azure-storage +56484241,"AZURE_FUNCTIONS_ENVIRONMENT vs ASPNETCORE_ENVIRONMENT

In azure functions (v2 c#) there are two environment variables that can be potentially used to identify the name of the current environment.

  • AZURE_FUNCTIONS_ENVIRONMENT
  • ASPNETCORE_ENVIRONMENT

I am planning to use AZURE_FUNCTIONS_ENVIRONMENT and I am wondering if there are reasons to choose one over another?

In terms of behavior of the two this is what I discovered:

  • AZURE_FUNCTIONS_ENVIRONMENT is set to Development locally by the functions host/runtime. It is not automatically set in azure to Production. One can set this in App Settings in azure.
  • ASPNETCORE_ENVIRONMENT is not set by the functions host/runtime either locally or in Azure.

I have also raised a github issue about this a couple of weeks ago but got no response. I am hoping I might get an answer here.

""",azure-functions +15122998,Azure Blob Shared Access Signature without the api

I'm trying to create a REST call to Azure to List Blobs within a container. The container is private so I need to access it through a Shared Access Signature (SAS).

I make that call in a Silverlight application so I cannot use the Client API.

I find a lot of examples with ClientAPI but nothing really clear and obvious for REST.

Anyone has a nice... clean and simple example on how to do that?

Thanks

,azure-storage +39381114,"Azure Webapp wheels --find-links does not work

I have been struggling with --find-links for an entire day and I will be very grateful if sb could help me out here.

I have been developing using python3.4 and one of the new features I added uses Azure Storage( the most recent version) and it requires cryptograph which requires cffi idna etc... However when I try to test it against Azure Webapp the deployment failes saying 'error : unable to find vcvarsall.bat'

With some research I figured putting --find-links wheelhouse at the top of my requirements.txt and have wheels(cffi-1.8.2-cp34-cp34m-win32.whl (md5) and cryptography-1.5-cp34-cp34m-win32.whl (md5)) located at wheelhouse folder in the root should work. This was not helping at all and I was running into same problems.

I tried --no-index and it gives \Could not find any downloads that satisfy the requirement cffi==1.8.2\"". Somebody says if I want to use --no-index then I should have all wheels located in wheelhouse; otherwise i will get that error.

With this I would like to use my wheels for cffi and cryptograph and the rest download from pypi. Anyone have any clue...? HELP!

""",azure-web-app-service +55574037,"Access denied on wwwroot after DevOps deployment

I've deployed a .Net Core web application to Azure App Service using Azure DevOps. Now when I try to create file in 'D:\\home\\site\\wwwroot' using Kudu it says:

409 Conflict: Could not write to local resource 'D:\\home\\site\\wwwroot\\anc' >due to error 'Could not find file 'D:\\home\\site\\wwwroot\\anc'.'.

I've noticed that the persmissions on the 'D:\\home\\site\\wwwroot' directory are different than in a similar web app that I deployed using Publish Profile

Get-Acl result on the problematic app:

PS D:\\home\\site\\wwwroot> Get-Acl \D:\\home\\site\\wwwroot\"" Get-Acl \""D:\\home\\site\\wwwroot\""         Directory: D:\\home\\site      Path    Owner                   Access                                           ----    -----                   ------                                           wwwroot IIS APPPOOL\\luncher-dev NT AUTHORITY\\SYSTEM Allow  FullControl...         

Get-Acl result on other similar app:

PS D:\\home\\site\\wwwroot> Get-Acl \""D:\\home\\site\\wwwroot\"" Get-Acl \""D:\\home\\site\\wwwroot\""         Directory: D:\\home\\site      Path    Owner                  Access                                            ----    -----                  ------                                            wwwroot BUILTIN\\Administrators Everyone Allow  DeleteSubdirectoriesAndFiles ...   

Corresponding Release pipeline from Azure DevOps

\""Dev

How can I make the wwwroot directory writable?

""",azure-devops +53532538,Running an Azure DevOps pipline via CLI or web-hook

Is there a way to run (queue) a specific azure pipeline from the command line or via http web-hook or an API ? I would like to automatically trigger a pipeline without the need to change git or whatever.

,azure-devops +50077386,"How to dynamically define 'path' in @BlobOutput?

I am looking at the following code example at https://github.com/Azure/azure-functions-java-worker

public class MyClass {     @FunctionName(\""copy\"")     @StorageAccount(\""AzureWebJobsStorage\"")     @BlobOutput(name = \""$return\""  path = \""samples-output-java/{name}\"")     public static String copy(@BlobTrigger(name = \""blob\""  path = \""samples-input-java/{name}\"") String content) {         return content;     } } 

In @BlobOutput we are using {name} parameter because it was provided to us in @BlobInput. How can I dynamically generate that name in my function?

I want my blob name to be files/E36567AB1B93F7D9798 where the E36567AB1B93F7D9798 part is a hash generated from blob content. I want to generate it inside the function and return the hash as output. Sort of like GitHub creates unique IDs for files.

""",azure-functions +11096470,"Accessing azure storage services

There are couple of ways to access azure storage services. And I wanted to know from the experts:

  • Which is the recommended way for accessing azure storage services?
  • What are the pros/cons of either? (like performance no of requests…)

Windows Azure Storage Client Library Class Library OR Windows Azure Storage Services REST API

""",azure-storage +55062220,Disable certain W3C logging fields on Azure App Services

When logging web server logs in Azure App Services every field is logged by default and there doesn't seem to be any way to disable specific fields.

Is there a way around this? Or am I missing something?

,azure-web-app-service +41119540,To add/remove Partition key and Row key to use Azure Tables with WebAPI

I am developing an application back-end using Azure and Web API. For this I have created some Azure tables. Below is the sample of my model.

public class Device : TableEntity  {    public Device(string partitionKey  string rowKey)      {         this.PartitionKey = partitionKey;         this.RowKey = rowKey;      }        public Device() { }        public string DeviceName { get; set; }        public string DeviceOS { get; set; }        public string Make { get; set; }  } 

The Partition Key is formed by using Table Name like UD_Device (UD_ being a constant and Device being the table name. The Row key is simply the DeviceName unique for all devices.

Now when I query these tables in my Web API I get a List of entities along with Partition key and Row key as properties in them.

This list I have to give it to the front-end angular application as JSON but the Partition Key and Row Key are not to be sent while doing this.

Same thing when I am making a POST request i.e. when I am getting data from angular front-end and I have to send it to Azure Table then the user does not send the Partition key and Row key. So how could I make a model which caters need to both this requirement?

,azure-storage +11733665,"Load Balancing virtual machines via Service Management API - MS Azure

I found the below article to create a virtual machine and load balance with an existing virtual machine.

https://www.windowsazure.com/en-us/manage/windows/common-tasks/how-to-load-balance-virtual-machines/?_sm_au_=iVVNR02FVsMFjVB3

But how can the same be done via Service Management API.

The related tags i found in the POST request to create a VM are

LoadBalancedEndpointSetName LoadBalancerProbe

Where do I get started ? How do i connect two virtual machine via API ?

Thanks.

""",azure-virtual-machine +56664562,"How to access Azure VM from App Service in virtual network by private DNS name?
  • VM and App Service are located in the same Virtual Network.
  • App Service is added to VM through VNet Integration (preview)
  • VM is autoregistered in Private DNS zone say by name myvm1. And full name myvm1.priv.zone
  • Private DNS zone is linked to Virtual Network.
  • Virtual Network - DNS Servers is set to default.
  • VM and App Service were restarted after configuration.

Problem is I can resolve neither myvm1 nor myvm1.priv.zone from App Service console by nameresolver.exe

UPDATE: Actually the issue is even bigger. App Service is not able to send requests to VMs in Virtual Network by their Private IPs (10.1.x.x) even if everything is allowed on VMs' subnet. If the same requests are sent to VMs' Public IPs there is no problem. \""VNET

""",azure-virtual-machine +46047177,Build and Deploy Azure Functions App from Build Server

I have an Azure Functions App developed in Visual Studio using C# and Microsoft.NET.Sdk.Function.

I need to build and deploy this app from our Jenkins build server. What is the recommended approach? MSBuild? MSDeploy? Azure Functions CLI? FTP? I can't use the Source Control or VSTS deployment.

Some sample scripts would be appreciated!

,azure-functions +42086641,"Azure App-Service Swap \Bounces\"" Between Source and Destination

I'm seeing some interesting behavior on Azure App Service that I'm hoping somebody will be kind enough to comment on.

Reproduction steps (all Azure steps can be done in the portal):

  • Create a new Web App in App Service (Standard pricing level single instance is fine) e.g. mysite
  • Create a new staging slot for that App e.g. mysite-staging
  • Deploy a bare-bones ASP.NET app to mysite with a file /scripts/test.js that has the content //ONE
  • Deploy a bare-bones ASP.NET app to mysite-staging with a file /scripts/test.js that has the content //TWO
  • Swap the deployment slots
  • Immediately after the swap starts navigate to mysite.azurewebsites.net/scripts/test.js and monitor the returned content during the swap operation (by continually doing a force-refresh in the browser)

What I would expect to see:

  • At some point during the swap the content changes seamlessly/consistently/irreversibly from //ONE to //TWO

What I actually see:

  • During the swap operation the content \""flickers\""/\""bounces\"" between //ONE and //TWO. After the swap operation is complete the behavior is stable and //TWO is consistently returned

The observed behavior suggests that there is no single point in time at which all traffic can be said to be going to the new version.

The reason this concerns me is the following scenario:

  • A user requests a page mysite.azurewebsites.net which during this \""bouncing\"" stage responds with the \""v2\"" version of the page with a link to a CDN-hosted script mycdn.com/scripts/test.js?v2 (the ?v2 is a new query string)
  • The browser requests the script from the CDN which in turn requests the script from mysite.azurewebsites.net. This time the \""bouncing\"" causes the response to be the v1 version of the script.
  • Now we have a v1 version of the script cached in the CDN which all users in that region will load with the v2 version of the page

My question: Is this \""bouncing\"" behavior during a swap operation \""by design\""? If so what is the recommended approach for solving the pathological case above?

""",azure-web-app-service +52319281,"Azure Function Custom Class request body - No Parameter-less constructor/Invalid Cast string -> guid

I have an azure function that looks something like:

    [FunctionName(\AddMaterial\"")]     public static async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Function  \""post\""  Route = null)]AddMaterialCommand command           ILogger log  [Inject(typeof(IMediator))]IMediator mediator)     {         log.LogInformation(\""AddMaterial Function is processing a request\"");          var events = await mediator.Send(command);         if (events != null)         {             await mediator.Publish(events);             return (ActionResult)new OkObjectResult(events);         }         return new BadRequestObjectResult(new { message = \""Please check that WarehouseId  RollPoNumber  RollNumber  Location and RollWeight are included in request\"" });     } 

This function uses the custom object AddMaterialCommand as the request per the docs.

The custom object class looks something like this:

{     [DataContract]     public class AddMaterialCommand : IRequest<EventList>     {         [DataMember]         public Guid WarehouseId { get; set; }          [DataMember]         public int RollPoNumber { get; set; }         [DataMember]         public DateTime? DateRecieved { get; set; }          public AddMaterialCommand(Guid warehouseId  int rollPoNumber   DateTime dateRecieved)         {             WarehouseId = warehouseId;             RollPoNumber = rollPoNumber;             Location = location;             DateRecieved = dateRecieved;         } } 

When posting to the function it throws this error:

Executed 'AddMaterial' (Failed Id=d7322061-c972-4e93-83cd-4d0313d26e86) [9/12/2018 8:59:46 PM] System.Private.CoreLib: Exception while executing function: AddMaterial. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'command'. System.Private.CoreLib: No parameterless constructor defined for this object.

When I add a parameterless constructor (why do I need to do this?) it then fails with this error:

Executed 'AddMaterial' (Failed Id=973cd363-19d6-49a3-a2eb-759f30c284bb) [9/12/2018 9:01:27 PM] System.Private.CoreLib: Exception while executing function: AddMaterial. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'command'. System.Private.CoreLib: Invalid cast from 'System.String' to 'System.Guid'.

What is going on here?

My best guess is that the body of the request is not getting read and that an empty value is throwing the invalid cast exception. I'm still clueless as to why I need a paramaterless constructor. I didn't have this issue before moving to azure functions when I was using the [FromBody] binding but I don't think I can use that binding with azure functions.

""",azure-functions +20631279,SSRS Reports hosted in Azure Virtual Machine not available outside the VM

I have created an ssrs report inside an Azure Virtual Machine (SQL Server 2012 SP1 on Windows Server 2012). When I try to view the report from the Virtual machine it opens up in the browser with a proper url like

    http://mysamplevm/ReportServer/Pages/ReportViewer.aspx?%2fMySampleReport&rs:Command=Render 

When I try to open the same url from my local machine it says webpage is not available. I have completed the following settings too.

  • Created Inbound & Outbound rules in Virtual Machine Firewall for port numbers 80 and 443.
  • Created end points for the same port numbers in azure management portal.
,azure-virtual-machine +48428031,Weird issue with JWT token with web app in Azure

I have 2 web apps setup in azure both with the same clientids setup with oauth middleware. Both use the same Azure AD with the same user I get a token and on one webapp I get a claimsprincipal but on the other one I get a windowsprincipal. Why could this be?

,azure-web-app-service +39845540,"Could not retrieve the repositories: Visual Studio Team Services and Azure

I'm trying to do Continuous delivery to Azure using Visual Studio Team Services. But when i try to connect Azure my web app to Visual Studio Team Services (Visual Studio Online) after typing the url for Team Services. it does the authorization successfully. but I get the following error.

\""enter

I was looking at this screen for a long time but it doesn't seem to complete. What mistake am I making here?

""",azure-devops +25491683,Lost all linux azure VM changes from 8/16 - 8/22. Possible to get those changes back & What can I do in the future to avoid this?

I'm having a serious concern with my azure VM as the title says I lost all of my change from 8/16 to 8/22. This was noticed on the 23rd I am wondering if due to maintenance all changes were reverted back to 8/16? I need to know if its possible to get the VM back to state it was at the end of the day 8/22.

More importantly - I need to know how to avoid such regressions of the VM in the future.

,azure-virtual-machine +14260654,"Unable to recreate Azure VM deleted after going over free limit

I had two VMs running in azure and went over my free limit for this month. I enabled the ability to charge my account and found they were gone.

The VM disks are still there but the VMs themselves have been made into hosted services. To recreate the VM I deleted the hosted service then went to the create new VM dialog like I've seen others post previously. Under the \create from disk\"" option I do not see either of my OS disks as options to create a VM. Is this the right way to recreate VMs or am I missing something?

Also of note the disks still show up as attached to the deleted VMs in the portal.

""",azure-virtual-machine +55605560,"Azure Blob Storage Java SDK: Why isn't asynchronous working?

I am still spinning up on the Azure Storage Java SDK 10 and its use of reactive programming the paradigm. I wrote the following method to asynchronously download a blob to a byte stream as fast as possible. When I use the synchronous version (below) it works properly. When I comment out the blockingAwait() and uncomment the subscribe the write and the doOnComplete never are executed... Basically the run just falls out of the bottom of the method back to the caller. I am sure that I have made an asynchronous processing mistake and hope that someone can steer me in the correct direction. By the way I was surprised to find that there are very few samples of downloading to a stream rather than to a file... Hopefully this posting will help others.

Thank you for your time and interest in my problem...

override fun downloadBlob(url: String  downloadStream: OutputStream) {      BlockBlobURL(URL(url)  pipeline)             .download(null  null  false  null)             .flatMapCompletable { response ->                 FlowableUtil.collectBytesInBuffer(response.body(null))                         .map {                             Channels.newChannel(downloadStream).write(it)                         }.toCompletable()             }.doOnComplete {                 println(\The blob was downloaded...\"")             }.blockingAwait()             //.subscribe() } 

Here is the code that is calling the above method:

fun getAerialImageBlobStream(aerialImageUrl: String): MapOutputStream {      val aerialImageStream = MapOutputStream()     blobStorage.downloadBlob(aerialImageUrl  aerialImageStream)     return aerialImageStream } 
""",azure-storage +43207962,"Azure Web App - Prevent routing to specific instances

We are hosting an ASP.NET Core application on an Azure App Service (Web Apps).

Our individual instances take some time to \preload\"" the required data needed to process requests. But when scaling out requests will be routed to the instances still being prepared.

How does the App Service load balancer decide when an instance is ready and requests can be routed to it? Is there a way to prevent routing to some specific instance until we deem it ready?

""",azure-web-app-service +50011679,"Was BlobEncryptionPolicy removed for azure storage?

I'm trying to use client side encryption for azure to securely upload files to blob storage in .NET

However it seems that BlobEncryptionPolicy is not available and I have not seen any documentation specifying alternative solutions from microsoft.

Even their documentation still uses BlobEncryptionPolicy:

Client-Side Encryption and Azure Key Vault for Microsoft Azure Storage

Specifically i'm inside of a xamarin project using the latest .net version.

If i create a sample console app I can reference BlobEncryptionPolicy without any issues. However the same nuget package inside a xamarin shared project can not resolve the reference to BlobEncryptionPolicy under the Microsoft.WindowsAzure.Storage.Blob namespace.

Does anyone know what is going on here?

""",azure-storage +49314112,"Add Azure Web App diagnostic log settings to ARM template

I'm looking for the option to enable diagnostic log settings (file level not blob) on the template deployment stage.
I've found the following example on Github however it doesn't work saying \""Microsoft.Web/sites/logs\"" is not a valid option\"".
Below is the part of my template:

{           \""apiVersion\"": \""2015-08-01\""            \""name\"": \""logs\""            \""type\"": \""config\""            \""location\"": \""[resourcegroup().location]\""            \""dependsOn\"": [             \""[resourceId('Microsoft.Web/Sites'  parameters('siteName'))]\""           ]            \""properties\"": {             \""applicationLogs\"": {               \""fileSystem\"": {                 \""level\"": \""Verbose\""               }             }              \""httpLogs\"": {               \""fileSystem\"": {                 \""retentionInMb\"": 100                  \""retentionInDays\"": 90                  \""enabled\"": true               }             }              \""failedRequestsTracing\"": {               \""enabled\"": true             }              \""detailedErrorMessages\"": {               \""enabled\"": true             }           }         }  

Also I've found the following discussion on a similar question but the topic starter stated that this piece of code works correctly in most cases.

""",azure-web-app-service +51308330,"How to use Connection String in Azure functions for EF Core 2.1

When using Entity Framework 6+ I can have a class inherit form DbContext like this

MyContext : DbContext 

Then I could use the code like this

using (var context = new MyContext())         {         ...         } 

As long as I had a configuration file with connection string settings with the same name this would be picked up by Entity Framework. Quite nice for different environments.

Now I am working with Azure Functions running on .NET CORE .NET STANDARD and Entity Framework Core 2.1

But I cant figure out how to achieve the same. Even though there is a dedicated section for ConnectionStrings in Azure Function app and i would expect the local.seeting.json with an input like this

  {       \IsEncrypted\"": false        \""Values\"": {         \""AzureWebJobsStorage\"": \""UseDevelopmentStorage=true\""          \""AzureWebJobsDashboard\"": \""UseDevelopmentStorage=true\""        }        \""ConnectionStrings\"": {         \""MyContext\"": \""Server=(localdb)\\\\mssqllocaldb;Database=MyContext;Trusted_Connection=True;ConnectRetryCount=0\""       }     }  

should do the same. But no.

All the samples I can find is where you have to inject the DbContextOptionsBuilder or a connection string into the constructor.

But since it is Azure Function and DI framework dont play that well with it I rather avoid to pass the connection string or db context all the way down the layers.

In short: Can EF Core not pick up the settings from a file itself?

""",azure-functions +49900848,"Azure Functions \Consumption Plan\"" HIPAA Compliance

Since Azure Functions host are dynamically added and removed based on the number of incoming events under \""Consumption Plan\"" what is the guarantee that Azure transparently encrypts the data in-transit as well as at-rest on the hosts? Are there any documentations which can share some light on how Azure Functions fulfills HIPAA compliance?

""",azure-functions +51558155,".Net Core web api not working after deployed to Azure

I do have a simple .Net Core web api application the one made by Visual Studio when a new project is created. I want to deploy it to an Azure App Service via FTP (part of a TFS 2017 build job) which is successful:

\""enter

However when trying a GET like http://somerandomname.azurewebsites.net/api/values all I get is a 404 with the text

The resource you are looking for has been removed had its name changed or is temporarily unavailable.

From Kudu I get the following error: \""enter

What am I missing?

""",azure-web-app-service +45571362,"Full Public Read Access on Azure Storage Emulator

I am putting together a website which I would like to host static content via Azure Blob. The documentation is very clear on how to set \Public read access for blobs only\"" to a container via this document: https://docs.microsoft.com/en-us/azure/storage/storage-manage-access-to-resources

In my development environment I am using the Azure storage emulator (https://docs.microsoft.com/en-us/azure/storage/storage-use-emulator).

My question is: How can I set the permission of a container in the emulator to \""Public read access for blobs only\""?

""",azure-storage +53257498,"How to get an free Azure account for testing Azure devOps project

I am trying to get a Azure free account but it needs a credit card at sign up. I don't have a credit card to get started. Is there an alternative for subscribing to Azure. I require this to work on Azure DevOps CD stage.

\""Subscription\""

This is required for working on Azure DevOps Labs. Since there is not much practical solution available out there for Azure DevOps these labs provide very good base for learning the DevOps process. These labs require azure subscription which is a road block for even learning the process.

\""azuredevopslabs\""

""",azure-devops +52377689,"Merge multiple web projects into a single output zip package

I'm using Visual Studio Online and working through building a continuous integration setup. The scenario I have requires that multiple web projects are built out to a single Azure App Service deployment. The catch is that \out of the box\"" when you create a new build the Visual Studio Build task appears to create a separate zip file for each project in the solution and then the Azure App Service Deploy task throws an error saying there is more than one file matching the pattern which is *.zip.

What I'd like to do is build all of these projects out to a single location merging the various projects together and then Azure App Service Deploy only has a single zip file to push which I know it can do just fine. My MSBuild arguments for the Visual Studio Build task are /p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation=\""$(build.artifactstagingdirectory)\\\\\"". I tried removing the PackageAsSingleFile attribute but it still created the zip files.

Is the scenario I want possible in VSO, perhaps with a different set of MSBuild arguments?
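One approach that is commonly suggested for this kind of merge (a sketch, not the only way): switch the build to FileSystem publishing so every web project publishes into one shared folder, then zip that folder once with a single Archive Files task and hand the resulting zip to the App Service Deploy task. The `merged` folder name below is an assumption:

```
/p:DeployOnBuild=true
/p:WebPublishMethod=FileSystem
/p:DeleteExistingFiles=false
/p:SkipInvalidConfigurations=true
/p:publishUrl=$(build.artifactstagingdirectory)\merged
```

With these arguments the Visual Studio Build task no longer produces per-project zips; whether the two sites can safely share one content root (colliding web.config files, bin folders) still has to be checked for the specific solution.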

""",azure-devops +57137369,"How do I transfer artifacts in Azure Pipelines between pipelines in the same azure project?

I am trying to set up Azure Pipelines to have the Idris 1 binary produced for the various platforms here: https://github.com/zenntenn/Idris-dev from head and use it to build Idris 2 head for the various platforms from here: https://github.com/zenntenn/Idris2 .

My problem is I can't figure out how to configure the two pipelines properly to make this work.

I have been trying to follow the documentation here:

https://docs.microsoft.com/en-us/azure/devops/pipelines/artifacts/pipeline-artifacts?view=azure-devops&tabs=yaml

I can't figure out how to get the exact YAML needed to make it work for Idris 2.

Idris 1 pipeline is here: https://dev.azure.com/zentenca/Idris/_build?definitionId=2

Idris 2 pipeline is here: https://dev.azure.com/zentenca/Idris/_build?definitionId=1

This is the relevant section of my current Idris 1 azure-pipelines.yml:

  # Test on Linux
  - job: Linux
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    - script: |
        echo Collection ID is $(System.CollectionId)
        sudo add-apt-repository ppa:hvr/ghc
        sudo apt-get update
        sudo apt-get install ghc-8.2.2 cabal-install-2.2
        sudo update-alternatives --config opt-ghc
        sudo update-alternatives --config opt-cabal
      displayName: 'Prepare system'
    - script: |
        export PATH=/opt/ghc/bin:$HOME/.cabal/bin:$PATH
        cabal update
        CABALFLAGS="-fffi -fci" make
      displayName: 'Build Idris'
    - script: |
        export PATH=/opt/ghc/bin:$HOME/.cabal/bin:$PATH
        make test_c
      displayName: 'Run tests'
    - publish: $(System.DefaultWorkingDirectory)/
      artifact: LinuxHead

This is what I have currently for Idris 2's azure-pipelines.yml:

# Build Idris 2 from Idris 1. Idris 1 located here: https://github.com/idris-lang/Idris-dev
jobs:
  # Linux build using the latest Idris 1
  - job: Linux_Latest
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
    - task: DownloadPipelineArtifact@2
      inputs:
        source: 'specific'
        artifact: LinuxHead
        project: e3cceb10-4a17-48c7-a9b8-72264bd71a81
        pipelineid: 2
        runVersion: 'latest'
    - script: |
        echo Works so far
      displayName: 'Linux build using the latest Idris 1'

I am trying to have the build results of Idris 1 show up in a way that I can access them in the Idris 2 pipeline.

The current error is: "Input string was not in a correct format."

If in the Idris 2 azure-pipelines.yml I change pipelineid: to pipeline: I get the error:

"TF50309: The following account does not have sufficient permissions to complete the operation: Idris Build Service (zentenca). The following permissions are needed to perform this operation: View project-level information."

Example build result using pipeline: is here: https://dev.azure.com/zentenca/Idris/_build/results?buildId=35&view=results
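For comparison, a hedged sketch of the DownloadPipelineArtifact@2 task using the documented input names (project/pipeline rather than pipelineid; the download path value is an assumption). The TF50309 message quoted earlier suggests the project's build service account also needs the "View project-level information" permission granted before a cross-pipeline download like this can succeed:

```yaml
steps:
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    project: 'Idris'            # project name or GUID
    pipeline: 2                 # definition ID of the Idris 1 pipeline
    runVersion: 'latest'
    artifact: 'LinuxHead'
    path: '$(Pipeline.Workspace)/idris1'
```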

""",azure-devops +36089171,Consume soap web service from Azure web app

I'm trying to consume a soap web service from an azure web app.

To do this I first need to run a connectivity test through telnet.

How would I go about doing this in Azure?

thanks

Chris

,azure-web-app-service +41982562,"VSTS Task Creation : Required field missing

I am trying to create task in VSTS but I am getting the below error.

TF401320: Rule Error for field Task Type. Error code: Required HasValues LimitedToValues AllowsOldValue InvalidEmpty.

From Exception it is clear that I am missing a required field which is Task Type. Now I am not able to find the field path for Task Type. Can anyone help me with this.

Below is the code I am writing to add a task :

string discipline = "Research Task";
if (taskDesc.Key.Contains("Configuration")) {
    discipline = "Dev Task";
}
if (taskDesc.Key.Contains("Validation")) {
    discipline = "Quality Task";
}

var workitemtype = "Task";
var document = new JsonPatchDocument();
document.Add(
    new JsonPatchOperation()
    {
        Path = "/fields/Microsoft.VSTS.Common.Discipline",
        Operation = Microsoft.VisualStudio.Services.WebApi.Patch.Operation.Add,
        Value = discipline
    });
document.Add(
    new JsonPatchOperation()
    {
        Path = "/fields/System.Title",
        Operation = Microsoft.VisualStudio.Services.WebApi.Patch.Operation.Add,
        Value = string.Format("{0} {1}", porIDText, taskDesc.Key)
    });
document.Add(new JsonPatchOperation()
{
    Path = "/fields/System.AreaPath",
    Operation = Microsoft.VisualStudio.Services.WebApi.Patch.Operation.Add,
    Value = System.Configuration.ConfigurationManager.AppSettings["AreaPath"]
});
document.Add(
    new JsonPatchOperation()
    {
        Path = "/fields/System.AssignedTo",
        Operation = Microsoft.VisualStudio.Services.WebApi.Patch.Operation.Add,
        Value = "<name>"
    });
document.Add(
    new JsonPatchOperation()
    {
        Path = "/fields/System.Description",
        Operation = Microsoft.VisualStudio.Services.WebApi.Patch.Operation.Add,
        Value = taskDesc.Value
    });
var wi = client.CreateWorkItemAsync(document, teamProjectName, workitemtype).Result;
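Independent of the C# SDK, the payload that CreateWorkItemAsync ultimately sends to the work item tracking REST API is a JSON Patch array of "add" operations. A small sketch of that shape (the field values here are hypothetical placeholders, and the work item type itself is conveyed in the request URL rather than the patch body):

```python
import json

def make_patch(path, value):
    # One JSON Patch "add" operation, as sent to the work-item tracking REST API.
    return {"op": "add", "path": "/fields/" + path, "value": value}

document = [
    make_patch("System.Title", "POR-42 Configuration"),      # hypothetical value
    make_patch("System.AreaPath", "Project\\Team"),          # hypothetical value
    make_patch("Microsoft.VSTS.Common.Discipline", "Dev Task"),
]

payload = json.dumps(document)
```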
""",azure-devops +54237884,Durable Functions - Activities seem to stop

Please could someone confirm my thoughts.

I have an orchestration which is calling the same Activity say 400 times. I'm using a fan-out/fan-in concept.

await Task.WhenAll(collectionOfTasks); 

If those 400 activities take longer than 10 minutes in total to process, it seems the orchestration doesn't complete and doesn't pick up/continue again unless another call is made to the orchestration method.

Is this right? Does the Azure Functions host shut down if nothing is running in the orchestration for 10 minutes, even if the activity functions are still running?
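For reference, on the Consumption plan each individual function invocation (including each activity function) is subject to the functionTimeout limit, which defaults to 5 minutes and can be raised to at most 10; the orchestrator itself checkpoints and replays, so it is the long-running activities that hit the cap. A hedged host.json sketch pinning the maximum (Functions v2 schema):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```

Activities that genuinely need more than 10 minutes would have to be split into smaller activities or moved to an App Service/Premium plan where the timeout can be extended.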

,azure-functions +36252723,How do I delete an availability set?

I wanted to try out Azure availability sets, but it turns out I can't use them because my nodes cannot be in the same cloud service. I now have an unused availability set, but I don't see it listed anywhere in my Resources in the Portal. Is there any way to delete an availability set?

,azure-virtual-machine +52235684,How to create VSTS Service Endpoint using Azure Keyvault secrets

I am working on VSTS CI/CD. For that I am trying to create an "Azure Resource Manager" service endpoint as a VSTS connection. But I don't want to give the SPN credentials, i.e. "Client Id and Client Secret", directly when making the connection; instead I need to pass SPN credentials that are saved as secrets in Azure Key Vault. Is it possible to create the VSTS service endpoint using Azure Key Vault secrets? If so, please suggest how to do it.

,azure-devops +57394529,"No agents are registered or you do not have permission to view the agents

After creating the new custom agent pool, when I view it I see the message "No agents are registered, or you do not have permission to view the agents." Can anyone tell me what kind of permission is needed, or do I need any other configuration?

And I have created this pool with my username which has administrator privilege.

""",azure-devops +45058223,"How to monitor Azure Classic VM using REST API or via Java SDK?

Hi, I want to monitor an Azure Classic VM using the REST API/Java SDK. I tried it with the REST API with the following URL (the below URL worked for an Azure VM):

https://management.azure.com/subscriptions/<subscription_id>/resourceGroups/Preprod2-Resource-Group/providers/Microsoft.ClassicCompute/virtualMachines/cloudops-testvm1/providers/microsoft.insights/metrics?api-version=2016-09-01 

I'm getting the following error

{ "code": "NotFound", "message": "Resource provider not found: [Microsoft.ClassicCompute]" }

Please let me know if this can be done via the REST API, or point me to an SDK that supports it.

My requirement is to monitor a Classic VM and collect Network In, Network Out, Percentage CPU, Disk Read Operations/Sec, Disk Write Operations/Sec, Disk Write Bytes and Disk Read Bytes every 5 minutes.
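The metric list above translates into a $filter expression on the microsoft.insights/metrics endpoint, which is easier to maintain when built programmatically. A small sketch of constructing and URL-encoding it (metric names are taken from the question; the timestamps are placeholders):

```python
from urllib.parse import quote

# Metric names as they appear in the question.
metrics = ["Network In", "Network Out", "Percentage CPU",
           "Disk Read Operations/Sec", "Disk Write Operations/Sec",
           "Disk Read Bytes", "Disk Write Bytes"]

# (name.value eq 'A' or name.value eq 'B' ...) and a 5-minute grain/window.
name_clause = " or ".join("name.value eq '%s'" % m for m in metrics)
filter_expr = ("(" + name_clause + ") and timeGrain eq duration'PT5M' "
               "and startTime eq 2017-08-01T00:00:00Z "
               "and endTime eq 2017-08-01T00:05:00Z")

# Percent-encode everything (spaces, quotes, slashes) for the query string.
query = "$filter=" + quote(filter_expr, safe="")
```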

""",azure-virtual-machine +32941879,"multiple public IPs for Azure Virtual machine

I have found it difficult to figure out how to have multiple public IPs for one Azure Virtual Server...

  1. it is possible?
  2. exactly what are the commands to do so?

I've already added what seemed to be virtual IPs via this article, which also references this article.

But I'm really confused now... These links talk about pricing, but nowhere on any page so far have I seen how to actually configure a load balancer. There's a difference between Reserved IPs and Virtual IPs (VIPs)...

help?

""",azure-virtual-machine +47749537,"Azure REST API metric Collection Data JSON Issue

When I try to collect metrics for VMs using the Azure REST API, for some machines I get metric data as an average and for some machines I get data as a total. Please help me fix this issue; I want to get only average data for the last 5 minutes.

I'm using the following URL

https://management.azure.com/subscriptions/{subscription_ID}/resourceGroups/rg-np-eastjp-vnet-10-198-0-0-24/providers/Microsoft.Compute/virtualMachines/vm-dev-eastjp-waf/providers/microsoft.insights/metrics?api-version=2016-09-01&$filter=%28+name.value+eq+%27Network+In%27+or++name.value+eq+%27Network+Out%27+or++name.value+eq+%27Percentage+CPU%27+or++name.value+eq+%27Disk+Read+Bytes%27+or++name.value+eq+%27Disk+Write+Bytes%27+or++name.value+eq+%27Disk+Read+Operations%2FSec%27+or++name.value+eq+%27Disk+Write+Operations%2FSec%27++%29+and+timeGrain+eq+duration%27PT5M%27+and+startTime+eq+2017-12-08T12%3A58%3A41.729+and+endTime+eq+2017-12-08T13%3A03%3A41.729+

(two screenshots of the returned data)

The following is the output when "total" is returned:

{
  "value": [
    {
      "data": [ { "timeStamp": "2017-12-11T08:19:00Z", "total": 1791478.0 } ],
      "id": "/subscriptions/{subscription_ID}/resourceGroups/MWatchLab-dev-db-mcs-473968/providers/Microsoft.Compute/virtualMachines/dev-db-mcs/providers/Microsoft.Insights/metrics/Network In",
      "name": { "value": "Network In", "localizedValue": "Network In" },
      "type": "Microsoft.Insights/metrics",
      "unit": "Bytes"
    },
    {
      "data": [ { "timeStamp": "2017-12-11T08:19:00Z", "total": 1503183.0 } ],
      "id": "/subscriptions/{subscription_ID}/resourceGroups/MWatchLab-dev-db-mcs-473968/providers/Microsoft.Compute/virtualMachines/dev-db-mcs/providers/Microsoft.Insights/metrics/Network Out",
      "name": { "value": "Network Out", "localizedValue": "Network Out" },
      "type": "Microsoft.Insights/metrics",
      "unit": "Bytes"
    },
    {
      "data": [ { "timeStamp": "2017-12-11T08:19:00Z", "total": 896.99 } ],
      "id": "/subscriptions/{subscription_ID}/resourceGroups/MWatchLab-dev-db-mcs-473968/providers/Microsoft.Compute/virtualMachines/dev-db-mcs/providers/Microsoft.Insights/metrics/Percentage CPU",
      "name": { "value": "Percentage CPU", "localizedValue": "Percentage CPU" },
      "type": "Microsoft.Insights/metrics",
      "unit": "Percent"
    },
    {
      "data": [ { "timeStamp": "2017-12-11T08:19:00Z", "total": 0.0 } ],
      "id": "/subscriptions/{subscription_ID}/resourceGroups/MWatchLab-dev-db-mcs-473968/providers/Microsoft.Compute/virtualMachines/dev-db-mcs/providers/Microsoft.Insights/metrics/Disk Read Bytes",
      "name": { "value": "Disk Read Bytes", "localizedValue": "Disk Read Bytes" },
      "type": "Microsoft.Insights/metrics",
      "unit": "Bytes"
    },
    {
      "data": [ { "timeStamp": "2017-12-11T08:19:00Z", "total": 5022690.87 } ],
      "id": "/subscriptions/{subscription_ID}/resourceGroups/MWatchLab-dev-db-mcs-473968/providers/Microsoft.Compute/virtualMachines/dev-db-mcs/providers/Microsoft.Insights/metrics/Disk Write Bytes",
      "name": { "value": "Disk Write Bytes", "localizedValue": "Disk Write Bytes" },
      "type": "Microsoft.Insights/metrics",
      "unit": "Bytes"
    },
    {
      "data": [ { "timeStamp": "2017-12-11T08:19:00Z", "total": 0.0 } ],
      "id": "/subscriptions/{subscription_ID}/resourceGroups/MWatchLab-dev-db-mcs-473968/providers/Microsoft.Compute/virtualMachines/dev-db-mcs/providers/Microsoft.Insights/metrics/Disk Read Operations/Sec",
      "name": { "value": "Disk Read Operations/Sec", "localizedValue": "Disk Read Operations/Sec" },
      "type": "Microsoft.Insights/metrics",
      "unit": "CountPerSecond"
    },
    {
      "data": [ { "timeStamp": "2017-12-11T08:19:00Z", "total": 10.77 } ],
      "id": "/subscriptions/{subscription_ID}/resourceGroups/MWatchLab-dev-db-mcs-473968/providers/Microsoft.Compute/virtualMachines/dev-db-mcs/providers/Microsoft.Insights/metrics/Disk Write Operations/Sec",
      "name": { "value": "Disk Write Operations/Sec", "localizedValue": "Disk Write Operations/Sec" },
      "type": "Microsoft.Insights/metrics",
      "unit": "CountPerSecond"
    }
  ]
}
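The response above shows that some machines return only "total" data points while others return "average". One hedged way to post-process this on the client side is to prefer "average" when present and fall back to "total" (sample data abbreviated from the response above):

```python
import json

response = json.loads("""
{
  "value": [
    {"data": [{"timeStamp": "2017-12-11T08:19:00Z", "total": 1791478.0}],
     "name": {"value": "Network In"}, "unit": "Bytes"},
    {"data": [{"timeStamp": "2017-12-11T08:19:00Z", "average": 12.5}],
     "name": {"value": "Percentage CPU"}, "unit": "Percent"}
  ]
}
""")

def value_of(point):
    # Prefer "average" when the service returned it; otherwise fall back to "total".
    if "average" in point:
        return point["average"], "average"
    return point["total"], "total"

readings = {m["name"]["value"]: value_of(m["data"][0]) for m in response["value"]}
```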
""",azure-virtual-machine +56272369,"How to deploy a ASP.NET Web Application + ASP.NET Web API to a single Web App resource on Azure

I am trying to automate the deployment process of an old project, using Octopus to deploy to Azure. The solution has two projects, one classic ASP.NET Web Application and one Web API; these were previously deployed to a single Web App resource. I can package each project separately using OctoPack, but how can I package the two as one and deploy to one Web App resource on Azure using Octopus?


""",azure-web-app-service +54637913,"Azure DevOps API- How to create a repository?

I'm developing an internal app that creates the skeleton of a solution according to internal guidelines.

As an improvement, I would like to enable the user to automatically have the solution "formalized" on our DevOps instance, where he would clone it and start coding right away, instead of the current download as ZIP.

In order to do that I started looking at the Azure DevOps docs, but could not figure out a way to create a repository via the API...

How can I do that?
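For what it's worth, the Azure DevOps REST surface does expose repository creation (Git > Repositories - Create): a POST to the project's _apis/git/repositories endpoint with a body of {"name": ...}. A sketch that only builds the request; the organization/project names are placeholders and PAT-based Basic authentication is omitted:

```python
import json

def build_create_repo_request(organization, project, repo_name, api_version="5.1"):
    # Sketch of the Git "Repositories - Create" REST call; send with any HTTP
    # client plus a personal access token in a Basic auth header (not shown).
    url = ("https://dev.azure.com/%s/%s/_apis/git/repositories?api-version=%s"
           % (organization, project, api_version))
    body = json.dumps({"name": repo_name})
    return url, body

url, body = build_create_repo_request("contoso", "Tools", "solution-skeleton")
```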

""",azure-devops +57185608,"How to create Azure function with C# using HTTP POST and store incoming data into text file in BLOB storage?

I am new to Azure Functions. I am trying to create an Azure function in the portal with an HTTP trigger which receives data as JSON and writes it as a text file in Blob storage. I know I am missing something here in the code:

function.json

{
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    },
    {
      "type": "blob",
      "name": "outputBlob",
      "path": "outcontainer/{rand-guid}",
      "connection": "AzureWebJobsStorage",
      "direction": "out"
    }
  ]
}

run.csx

#r "Newtonsoft.Json"
#r "Microsoft.WindowsAzure.Storage"
#r "Microsoft.Azure.WebJobs.Extensions.Storage"

using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.Azure.WebJobs.Extensions.Storage;

public static async Task<IActionResult> Run(HttpRequest req,
    [Blob("blobcontainer", Connection = "AzureWebJobsStorage")] CloudBlobContainer outputContainer,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    await outputContainer.CreateIfNotExistsAsync();

    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    var blobName = Guid.NewGuid().ToString();

    var cloudBlockBlob = outputContainer.GetBlockBlobReference(blobName);
    await cloudBlockBlob.UploadTextAsync(data);

    return new OkObjectResult(blobName);
}

It compiles successfully but getting run time error as below:

No value was provided for parameter 'outputContainer'

""",azure-functions +50629150,"VSTS build fails on clean due to dotnet version

My VSTS build has started to fail recently when doing a clean and it looks like it is something to do with the dotnet version that it's running. If I do a dotnet --version I get the following output:

2018-05-31T16:40:51.0191791Z 2.1.300-rc1-008673

Why is the build agent running an RC version of dotnet? How can I pin this to a released version?

Looking around at the scripts for building the images for the agents, I came across this change which is supposed to stop installing preview/RC versions - https://github.com/Microsoft/vsts-image-generation/commit/e9c0aec89ad797d1985a76ab262349b943b02c34 - but the agent that is getting built for me at the moment still has the RC version.

Here are the error logs from VSTS when we hit our clean stage that runs dotnet clean:

2018-06-01T10:35:05.2388624Z ========================================
2018-06-01T10:35:05.2389045Z Clean
2018-06-01T10:35:05.2389205Z ========================================
2018-06-01T10:35:05.2389542Z Executing task: Clean
2018-06-01T10:35:05.2389744Z Microsoft (R) Build Engine version 15.7.177.53362 for .NET Core
2018-06-01T10:35:05.2389940Z Copyright (C) Microsoft Corporation. All rights reserved.
2018-06-01T10:35:05.2390085Z
2018-06-01T10:35:05.2390243Z Build started 6/1/2018 10:35:04 AM.
2018-06-01T10:35:05.4567633Z      1>Project "D:\a\1\s\EvilCorp.Shopping.sln" on node 1 (Clean target(s)).
2018-06-01T10:35:05.4576926Z      1>ValidateSolutionConfiguration:
2018-06-01T10:35:05.4577117Z          Building solution configuration "Debug|Any CPU".
2018-06-01T10:35:05.4577301Z        ValidateProjects:
2018-06-01T10:35:05.4577538Z          The project "EvilCorp.Shopping.CloudFormation" is not selected for building in solution configuration "Debug|Any CPU".
2018-06-01T10:35:05.6568982Z      1>Project "D:\a\1\s\EvilCorp.Shopping.sln" (1) is building "D:\a\1\s\test\EvilCorp.Shopping.BitCoinMining.Tests\EvilCorp.Shopping.BitCoinMining.Tests.csproj" (2) on node 1 (Clean target(s)).
2018-06-01T10:35:05.6570131Z      2>_CheckForNETCoreSdkIsPreview:
2018-06-01T10:35:05.6570885Z          You are working with a preview version of the .NET Core SDK. You can define the SDK version via a global.json file in the current project. More at https://go.microsoft.com/fwlink/?linkid=869452
2018-06-01T10:35:05.6571157Z        CoreClean:
2018-06-01T10:35:05.6571366Z          Creating directory "obj\Debug\netcoreapp2.0\".
2018-06-01T10:35:05.6573173Z      2>C:\Program Files\dotnet\sdk\2.1.300-rc1-008673\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets(197,5): error : Assets file 'D:\a\1\s\test\EvilCorp.Shopping.BitCoinMining.Tests\obj\project.assets.json' not found. Run a NuGet package restore to generate this file. [D:\a\1\s\test\EvilCorp.Shopping.BitCoinMining.Tests\EvilCorp.Shopping.BitCoinMining.Tests.csproj]
2018-06-01T10:35:05.6574793Z      2>Done Building Project "D:\a\1\s\test\EvilCorp.Shopping.BitCoinMining.Tests\EvilCorp.Shopping.BitCoinMining.Tests.csproj" (Clean target(s)) -- FAILED.
2018-06-01T10:35:06.3974865Z      1>Project "D:\a\1\s\EvilCorp.Shopping.sln" (1) is building "D:\a\1\s\src\EvilCorp.Shopping.BitCoinMining\EvilCorp.Shopping.BitCoinMining.csproj" (3) on node 2 (Clean target(s)).
2018-06-01T10:35:06.3976276Z      3>_CheckForNETCoreSdkIsPreview:
2018-06-01T10:35:06.3976868Z          You are working with a preview version of the .NET Core SDK. You can define the SDK version via a global.json file in the current project. More at https://go.microsoft.com/fwlink/?linkid=869452
2018-06-01T10:35:06.4013004Z        CoreClean:
2018-06-01T10:35:06.4013409Z          Creating directory "obj\Debug\netcoreapp2.0\".
2018-06-01T10:35:06.4033872Z      3>C:\Program Files\dotnet\sdk\2.1.300-rc1-008673\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets(197,5): error : Assets file 'D:\a\1\s\src\EvilCorp.Shopping.BitCoinMining\obj\project.assets.json' not found. Run a NuGet package restore to generate this file. [D:\a\1\s\src\EvilCorp.Shopping.BitCoinMining\EvilCorp.Shopping.BitCoinMining.csproj]
2018-06-01T10:35:06.4039865Z      3>Done Building Project "D:\a\1\s\src\EvilCorp.Shopping.BitCoinMining\EvilCorp.Shopping.BitCoinMining.csproj" (Clean target(s)) -- FAILED.
2018-06-01T10:35:06.4237898Z      1>Done Building Project "D:\a\1\s\EvilCorp.Shopping.sln" (Clean target(s)) -- FAILED.
2018-06-01T10:35:06.4315334Z
2018-06-01T10:35:06.4317290Z Build FAILED.
2018-06-01T10:35:06.4320946Z
2018-06-01T10:35:06.4322792Z        "D:\a\1\s\EvilCorp.Shopping.sln" (Clean target) (1) ->
2018-06-01T10:35:06.4324451Z        "D:\a\1\s\test\EvilCorp.Shopping.BitCoinMining.Tests\EvilCorp.Shopping.BitCoinMining.Tests.csproj" (Clean target) (2) ->
2018-06-01T10:35:06.4324845Z        (ResolvePackageAssets target) ->
2018-06-01T10:35:06.4325763Z          C:\Program Files\dotnet\sdk\2.1.300-rc1-008673\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets(197,5): error : Assets file 'D:\a\1\s\test\EvilCorp.Shopping.BitCoinMining.Tests\obj\project.assets.json' not found. Run a NuGet package restore to generate this file. [D:\a\1\s\test\EvilCorp.Shopping.BitCoinMining.Tests\EvilCorp.Shopping.BitCoinMining.Tests.csproj]
2018-06-01T10:35:06.4326110Z
2018-06-01T10:35:06.4326235Z
2018-06-01T10:35:06.4326444Z        "D:\a\1\s\EvilCorp.Shopping.sln" (Clean target) (1) ->
2018-06-01T10:35:06.4326731Z        "D:\a\1\s\src\EvilCorp.Shopping.BitCoinMining\EvilCorp.Shopping.BitCoinMining.csproj" (Clean target) (3) ->
2018-06-01T10:35:06.4327177Z          C:\Program Files\dotnet\sdk\2.1.300-rc1-008673\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets(197,5): error : Assets file 'D:\a\1\s\src\EvilCorp.Shopping.BitCoinMining\obj\project.assets.json' not found. Run a NuGet package restore to generate this file. [D:\a\1\s\src\EvilCorp.Shopping.BitCoinMining\EvilCorp.Shopping.BitCoinMining.csproj]
2018-06-01T10:35:06.4327515Z
2018-06-01T10:35:06.4327682Z     0 Warning(s)
2018-06-01T10:35:06.4327863Z     2 Error(s)
2018-06-01T10:35:06.4327987Z
2018-06-01T10:35:06.4328151Z Time Elapsed 00:00:01.51
""",azure-devops +46531036,"nuget package restore fails for webapp with custom nuget.config

I am attempting to fix a problem I have with restoring nuget packages for a .net core 2.0 webapi that has a custom package source.

Basically when including the nuget.config any microsoft packages fail to install because it seems to ignore my nuget reference.

I have found a workaround, which is to remove my custom nuget.config and let the build fail once; when it fails it will have downloaded the proper things from nuget.org, and then by adding the custom file back in it will restore those Microsoft packages from disk and then reach out to get my custom NuGet package.

My nuget.config looks like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="ASPNET Team" value="https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json" />
    <add key="OTL" value="https://www.myget.org/F/{redacted}/api/v3/index.json" />
  </packageSources>
  <packageRestore>
    <add key="enabled" value="True" />
    <add key="automatic" value="True" />
  </packageRestore>
  <bindingRedirects>
    <add key="skip" value="False" />
  </bindingRedirects>
  <packageManagement>
    <add key="format" value="0" />
    <add key="disabled" value="False" />
  </packageManagement>
  <disabledPackageSources />
</configuration>

The Errors from Kudu are:

An error occurred while sending the request.
  A connection with the server could not be established
Retrying 'FindPackagesByIdAsync' for source 'https://www.myget.org/F/{redacted}/api/v3/flatcontainer/microsoft.extensions.caching.sqlserver/index.json'.
An error occurred while sending the request.
  A connection with the server could not be established
Retrying 'FindPackagesByIdAsync' for source 'https://dotnetmyget.blob.core.windows.net/artifacts/aspnetcore-ci-dev/nuget/v3/flatcontainer/microsoft.extensions.hosting.abstractions/index.json'.
An error occurred while sending the request.
  A connection with the server could not be established
Retrying 'FindPackagesByIdAsync' for source 'https://dotnetmyget.blob.core.windows.net/artifacts/aspnetcore-ci-dev/nuget/v3/flatcontainer/microsoft.extensions.caching.sqlserver/index.json'.
An error occurred while sending the request.
  A connection with the server could not be established
Retrying 'FindPackagesByIdAsync' for source 'https://dotnetmyget.blob.core.windows.net/artifacts/aspnetcore-ci-dev/nuget/v3/flatcontainer/microsoft.entityframeworkcore.tools/index.json'.
An error occurred while sending the request.
  A connection with the server could not be established
Retrying 'FindPackagesByIdAsync' for source 'https://www.myget.org/F/{redacted}/api/v3/flatcontainer/microsoft.extensions.dependencyinjection.abstractions/index.json'.
An error occurred while sending the request.
  A connection with the server could not be established
Retrying 'FindPackagesByIdAsync' for source 'https://dotnetmyget.blob.core.windows.net/artifacts/aspnetcore-ci-dev/nuget/v3/flatcontainer/microsoft.extensions.dependencyinjection.abstractions/index.json'.
An error occurred while sending the request.
  A connection with the server could not be established
Retrying 'FindPackagesByIdAsync' for source 'https://www.myget.org/F/{redacted}/api/v3/flatcontainer/microsoft.extensions.caching.sqlserver/index.json'.

Doing a dotnet restore directly from the Kudu console yields the same results. I have pulled the NuGet.config from my development machine, which I know successfully restores both Microsoft packages and custom packages, and attempted to use that, and it still failed.

I'm beginning to think it's an outbound port-blocking firewall thing within Azure, but some googling about outbound firewalls or proxies on web apps was not fruitful.

""",azure-web-app-service +52729006,"Azure Functions Mac - Wrong Host Version

I am trying to debug an Azure functions project on my Mac using Visual Studio Mac.

I have updated my core tools to version 2.0.3. If I type func at my terminal I can see I updated to the latest version.

(Azure Functions ASCII logo)

Azure Functions Core Tools (2.0.3)
Function Runtime Version: 2.0.12115.0

You can also see the runtime version is 2.0.12115.0.

However when I debug using Visual Studio Mac I get a runtime error:

Hosting environment: Production
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.
[09/10/2018 20:30:53] Reading host configuration file 'xxxxx/bin/Debug/netstandard2.0/host.json'
[09/10/2018 20:30:53] Host configuration file read:
[09/10/2018 20:30:53] {}
[09/10/2018 20:30:53] Starting Host (HostId=xxxxx, InstanceId=0ef8b0eb-215d-4d08-9945-6dd50c8094c7, Version=2.0.11933.0, ProcessId=22941, AppDomainId=1, Debug=False, ConsecutiveErrors=0, StartupCount=1, FunctionsExtensionVersion=)
Function host is not running. Press any to continue....
[09/10/2018 20:30:58] A ScriptHost error has occurred
[09/10/2018 20:30:58] System.Private.CoreLib: Could not load type 'Microsoft.Azure.WebJobs.Hosting.IWebJobsStartup' from assembly 'Microsoft.Azure.WebJobs.Host, Version=3.0.0.0, Culture=neutral, PublicKeyToken=null'.

Notice the runtime version is Version=2.0.11933.0.

There must be a way to tell Visual Studio where the Azure Functions Core Tools are installed, or at least a way to copy my 2.0.3 installation to wherever Visual Studio is executing the tools from.

""",azure-functions +55670691,"Azure QueueTrigger - binding both a POCO AND a CloudQueueMessage?

We use the Singleton attribute with a scope expression against a POCO. For example:

[Singleton("{SomeValue}")]
public static void SomeMethod([QueueTrigger("somequeue")] SomePOCO poco)

This works fine. We now, however, need to be able to change the queue message's visibility timeout and thus need access to the CloudQueueMessage itself, since CloudQueue.UpdateMessage() requires the CloudQueueMessage. However, after much reading of documentation (and trial and error) it seems that a POCO AND a CloudQueueMessage cannot both be bound in the method signature - or at least I cannot figure out how to do it.

I have gone through documentation on creating your own custom bindings but:

  1. Its not clear that I can get the CloudQueueMessage this way since there seems to be some internal implementation interfaces in the WebJobs SDK to instantiate the CloudQueueMessage and
  2. It seems a lot of work to do something I would have expected to be a somewhat common use case.

What am I missing in this scenario - is there a simple way of declaring both a POCO and a CloudQueueMessage binding, or do I have to create a custom binding to get the CloudQueueMessage (and are there any hints for doing this)?

Cheers

""",azure-storage +33890149,"OpsHub VSO Migration Utility Error - could not initialize a collection

When I try to see migration progress in the utility I now get an error. It is showing this for all migrations I have set running. I have tried rebooting the machine but get the same error as below. Thanks in advance for any help.

See this link to a screenshot of error: could not initialize a collection: [com.opshub.dao.eai.EAIIntegrationsContext#1]

""",azure-devops +48714233,"Azure Account-Level SAS Token: Possible to Have a Policy?

I found out how to create an Azure account-level SAS token with PowerShell (link). However, the cmdlet New-AzureStorageAccountSASToken does not appear to accept a -Policy parameter.

Does this mean that there cannot be a SAS policy on an account-level SAS token? An implication would be that if the token were compromised one could not just remove the policy but would have to change the key.
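Background that may explain the cmdlet's shape: stored access policies are a service-SAS concept, while an account SAS is validated purely from its signature, so revoking a compromised account SAS does mean rolling the account key. The signing step itself is an HMAC-SHA256 over a canonical string-to-sign using the base64-decoded account key. A generic sketch of that step (the string-to-sign here is illustrative, not the exact account-SAS format):

```python
import base64
import hashlib
import hmac

def sign(string_to_sign, account_key_b64):
    # Storage account keys are distributed base64-encoded; decode before signing.
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Dummy key and illustrative string-to-sign, just to show the mechanics.
sig = sign("account\n2018-01-01T00:00:00Z\n...",
           base64.b64encode(b"dummy-key").decode("utf-8"))
```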

""",azure-storage +4171881,"Why does Html.DisplayFor and .ToString("C2") not respect CurrentUICulture?

In my ASP.MVC 2.0 website I have the following setting in web.config:

<globalization uiCulture="da-DK" culture="en-US" />

When I try to display an amount in a view using Html.DisplayFor() or ToString("C2"), I expected to get "kr. 3.500,00" (uiCulture) and not "$3,500.00" (culture).

<%:Html.DisplayFor(posting => posting.Amount)%>
<%:Model.Amount.ToString("C2")%>

If I explicitly use the CurrentUICulture info it works as expected, but I don't want to do that every time I need to display a number, date or decimal. And I would also like to use DisplayFor, which doesn't support the IFormatProvider parameter.

<%:Model.Amount.ToString(\""C2\"", System.Globalization.CultureInfo.CurrentUICulture)%> 

How can I change the formatting without changing the culture of the system?

This is running in Azure, and if I change the culture to \""da-DK\"", all decimal points are lost when saving to Azure Table storage! #BUG

""",azure-storage +56825285,"Where do I find my logs in azure app services?

I have enabled logging in my web app running on Azure App Service. I can see the log output if I enable log streaming, but I cannot find the logs anywhere in the GUI. So where are they?

I have defined my logging as follows in program.cs

WebHost.CreateDefaultBuilder(args)
    .UseApplicationInsights()
    .ConfigureLogging((hostingContext, logging) =>
    {
        logging.AddConfiguration(hostingContext.Configuration.GetSection(\""Logging\""));
        logging.AddConsole();
        logging.AddDebug();
        logging.AddEventSourceLogger();
        logging.AddApplicationInsights();
    })
    .UseStartup<Startup>();

And in my API controller I am simply doing this

private readonly ILogger _logger;

public ReveController(ILogger<Controller> logger)
{
    _logger = logger;
}

Followed by

_logger.LogInformation(\""Test test test\""); 

My logging settings in appsettings.json looks as follows

\""Logging\"": {     \""LogLevel\"": {       \""Default\"": \""Warning\""     }   }  

I looked on the App Service and in App Insights, but nowhere in the GUI can I find the entries. Where are they?

Am I missing something?

""",azure-web-app-service +57280331,"How does Nuget Restore task work? Does nuget.config in the solution have to match that of in the build server?

\""enterI am new to Nuget and have added a Nuget restore step to install dependencies on the build server. When I looked up the Nuget restore we need the Nuget.config file under the solution folder as well as in the local build machine where the build will be running. %Appdata%/Nuget/Nuget.config

My question is do the two nuget.config files need to match? Does the nuget.config file in the source repository replace the nuget.config in the build server during the build?

""",azure-devops +51830440,"Reset root account password of a CentOS 6.9 vm on azure?

I have a CentOS 6.9 VM on Azure; it was created from an uploaded VHD disk. The root account has been locked because of failed login attempts. So I wonder, is there a way to reset the root account password?

I've tried the steps of this post: https://serverfault.com/questions/680460/how-to-reset-root-password-on-a-linux-vm-on-windows-azure?newreg=5d2142cb59ba400a9def39ad7245fc74

But the Azure CLI gets stuck at Installing extension \""VMAccessForLinux\"" VM: \""API-GW\""

Any ideas on how to solve this?

""",azure-virtual-machine +46166865,Getting error when trying to call API inside Web App through another Web App - Azure

I have two Web Apps inside the same App Service plan. One is the back-end portion (an API using .NET Core, SSL cert installed) and the other is the front-end (React + TS, created using create-react-app).

When I try to call the API method (an Auth method) from my front-end, I get this message as the response:

Login failed: The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.

-404 error

Another fact: if I run my front-end solution locally, I can use the API (published on the Web App) normally.

My API URL is set inside the package.json file as proxy.

My first thought was a CORS problem, but it throws a 404 error.

Is there any configuration I can do on Azure, or something I need to change in my application, to allow my front-end to communicate with my API?

,azure-web-app-service +44259047,"How to remove App Service Certificate resource

So I have an SSL certificate bought directly through the Azure portal.

Now I migrated from Azure and want to delete every resource from Azure except my SQL Server and database.

When I try to delete App Service Certificate I have this error:

Operation name: Delete the App Service Certificate
Time stamp: Tue May 30 2017 11:47:36 GMT+0200 (W. Europe Standard Time)
Event initiated by: -
Description: Failed to delete the App Service Certificate. : Delete for 'JerrySwitalski' App Service Certificate failed because there are still imported certificates derived from the App Service Certificate in the source subscription. Imported certificates: /subscriptions/77cf2897-8c03-413c-8e16-38ea0e025d72/resourceGroups/01/providers/Microsoft.Web/certificates/JerrySwitalski-01-SouthCentralUSwebspace /subscriptions/77cf2897-8c03-413c-8e16-38ea0e025d72/resourceGroups/01/providers/Microsoft.Web/certificates/JerrySwitalski-01-WestEuropewebspace

As you can see below, I have only the SQL Server and database, and this certificate:

How can I remove this certificate for good?

""",azure-web-app-service +50789319,Does Azure App Services come with a default WAF?

I understand that one can develop an App Service Environment and put a WAF within it to protect an AppService.

What I'd like to know is:

  • whether there is a default WAF provided by Microsoft, even a rudimentary one, in front of App Services that are not within an ASE.
  • if there isn't one, or one wanted to add another, can one actually put a WAF in front of a non-ASE App Service? (Doesn't the App Service have a public IP that would always be reachable?)

Thank you!

PS: Any link to documentation that can be referenced either way would be greatly appreciated.

,azure-web-app-service +24691018,How to access Azure storage queue by JavaScript

For testing purposes, we would like to access an Azure storage queue directly with JavaScript instead of preparing a new web service.

Is this possible? What should we do to achieve this, since I cannot find any official documentation for a JavaScript API for Azure storage?
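
For what it's worth, the storage REST API can be called directly from the browser, assuming CORS is enabled on the storage account and a SAS token is used for authorization; account, queue, and SAS values below are hypothetical. A minimal sketch:

```javascript
// Build the XML body the Queue service REST API expects for Put Message.
// (The official SDKs base64-encode the message text by convention.)
function buildQueueMessage(text) {
  return '<QueueMessage><MessageText>' + text + '</MessageText></QueueMessage>';
}

// Enqueue a message, authorizing with a SAS token appended to the URL.
function enqueue(account, queue, sasToken, text) {
  var url = 'https://' + account + '.queue.core.windows.net/' + queue +
            '/messages?' + sasToken; // sasToken like 'sv=...&sig=...'
  return fetch(url, { method: 'POST', body: buildQueueMessage(text) });
}
```

This is only a sketch of the REST route; keeping the SAS short-lived and narrowly scoped matters when it is visible to browser code.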

,azure-storage +11767055,How can I shake relational database thinking for designing an azure table storage datastore?

I have been trying to get a good grasp of Azure Table storage for a little while now, and while I generally understand how it works, I am really struggling to shake my relational database thinking. I usually learn best by example, so I'm wondering if someone can help me out. I'm going to outline a simple setup for how I would solve a problem using a relational database; can someone help guide me in converting it to use Azure Table storage?

Let's say that I have a simple note-taking app: it has users, each user can have as many notes as they want, and each note can have as many users (owners or viewers) as it needs. If I were going to deploy this using a relational database, I would likely deploy it as follows.

For the database I'd start with something like this:

CREATE TABLE [dbo].[Users](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Username] [nvarchar](20) NOT NULL)

CREATE TABLE [dbo].[UsersNotes](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [UserID] [int] NOT NULL,
    [NoteID] [int] NOT NULL)

CREATE TABLE [dbo].[Notes](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [NoteData] [nvarchar](max) NULL)

I would then setup a relationship between Users.ID and UsersNotes.UserID as well as Notes.ID and UsersNotes.NoteID with constraints to enforce referential integrity.

For the application I would have an ORM generate some entities with matching name properties for each of these and I'd probably call it a day:

public class Users
{
    public int ID { get; set; }
    public String Username { get; set; }
}
// and so on and so forth

I realize that this design is fully dependent on the relational database, and what I'm looking for is some advice on how to shake this train of thought and use Azure Table storage or any other non-relational data storage technique.

Let's also assume, for the sake of argument, that I've installed the Azure SDK and have played around with it, but my working knowledge of using the SDK is limited. I'd rather not focus on that, but rather on what a good solution to the above would look like. A good starting point will help make the SDK make sense to me, since I'll have a point of reference.

For the sake of completeness, let's say that:

  • Note data will change frequently when first created and taper off over time
  • Users will have many notes and notes may have multiple users (not concurrent just viewers)
  • I expect fairly few users (low hundreds) but I expect a fair number of notes (low hundreds per user)
  • I expect to query against Username the most and then show the notes the user has access to
  • I also expect, when viewing a Note, to show the other users with access to that note (a reverse lookup)
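
To make the shift concrete, here is one possible (hypothetical, denormalized) sketch: since the dominant query is by Username, partition on it, and keep a second table for the reverse lookup instead of a join table:

```
Table: UserNotes                 -- answers: notes for a user
  PartitionKey = Username
  RowKey       = NoteID
  NoteData     = ...             -- duplicated per owner/viewer, or looked up separately

Table: NoteUsers                 -- answers the reverse lookup: users for a note
  PartitionKey = NoteID
  RowKey       = Username
```

The trade-off in this sketch is duplicating data at write time so each query is a single-partition read; with hundreds of users and notes, the write amplification stays small.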
,azure-storage +56393355,Unable to build solution in VSTS pipeline

I am working on a Service Fabric solution, and I am able to build and test the changes on my local machine, whereas in the VSTS pipeline I am facing an issue saying that the interface method is not implemented.

If any of you have faced a similar issue, can you suggest how to fix or resolve it in the build pipeline?

Here is my scenario.

Interface

public interface IStudent
{
    void PrintFullName();
}

Base Class

public class BaseStudent
{
    public void PrintFullName()
    {
        // Implementation
    }
}

MainClass

public class Student : BaseStudent, IStudent
{
    public void PrintName()
    {
        // Implementation
    }
}
,azure-devops +49369681,Limit the number of messages processed by a queue triggered function

I have a queue triggered function that POSTs to a server. I want to limit the number of posts made to the server to 1 post every 3 seconds. Is there a way to do this? If so, how?
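
For reference, if this is a classic queue-triggered Function, one commonly used (if coarse) lever is the queues section of host.json, which controls how many messages the host pulls and processes concurrently; a sketch, assuming the v1 host.json schema:

```json
{
  "queues": {
    "batchSize": 1,
    "newBatchThreshold": 0
  }
}
```

This serializes processing per host instance but does not by itself enforce a 3-second gap (scale-out adds instances, and any fixed delay would still have to live in the function body).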

,azure-functions +52099198,"How to prevent a development staging website hosted on Azure from being indexed by search engines

Specific to Web Apps hosted on Microsoft Azure: is there a way to prevent the mydomain.azurewebsites.net URL from being indexed by search engines? I'm planning to use a web app as a staging website and don't want it to accidentally get indexed.

I know I could add a robots.txt file to the project with everything set to no-index, but I don't want to ever accidentally publish it to the production site (or, alternatively, forget to publish it to the staging website).

Is there a setting in Azure that will prevent the \"".azurewebsites.net\"" domain from being indexed? Or, if the robots.txt file is the only way, how do you keep it organized so that the right robots.txt file is published to staging and production using ASP.NET Core?
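
One pattern that avoids shipping different files is to serve robots.txt dynamically (for example from middleware or a route) and return the classic disallow-all body only when the request host ends with azurewebsites.net:

```
User-agent: *
Disallow: /
```

This is a sketch of the idea rather than a specific Azure setting; the production host would fall through to the permissive robots.txt instead.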

""",azure-web-app-service +45330982,"Failed comunication between Logstash and RabbitMQ (both in one separate MS Azure Virtual machine)

I have created two services in Azure, \""RabbitMQ\"" and \""ELK\"" (each in a separate MS Azure virtual machine).

My MVC ASP.NET application writes to RabbitMQ (I had to configure an endpoint (inbound security rule) for RabbitMQ (TCP/5672)), and I can see my queue in the RabbitMQ web admin. Logstash (in the ELK VM) writes a correct index for \""logstash-*\"", collecting data from the Apache log.

I think I am missing some endpoint configuration in the VM, but I don't know what. Or I have put some incorrect configuration in my Logstash config, /opt/bitnami/logstash/conf/logstash.conf:


input {
  rabbitmq {
    type => \""myqueuelog\""
    host => \""x.x.x.x\"" # ip address of the rabbitmq vm
    port => 5672
    vhost => \""myvhost\""
    queue => \""myqueuelog\""
    auto_delete => false
    codec => \""plain\""
    exclusive => false
    heartbeat => 30
    durable => true
    password => \""mypass\""
    user => \""user\""
  }
  rabbitmq {
    type => \""myqueueerror\""
    host => \""x.x.x.x\"" # ip address of the rabbitmq vm
    port => 5672
    vhost => \""myvhost\""
    queue => \""myqueueerror\""
    auto_delete => false
    codec => \""plain\""
    exclusive => false
    heartbeat => 30
    durable => true
    password => \""mypass\""
    user => \""user\""
  }
  beats { ssl => false host => \""0.0.0.0\"" port => 5044 }
  gelf { host => \""0.0.0.0\"" port => 12201 }
  http { ssl => false host => \""0.0.0.0\"" port => 8888 }
  tcp {
    mode => \""server\""
    host => \""0.0.0.0\""
    port => 5010
  }
  udp {
    host => \""0.0.0.0\""
    port => 5000
  }
}

filter {
  if [type] == \""myqueuelog\"" {
    # split the message field (in json format)
    json { source => \""message\"" } # where the json is
  }
  if [type] == \""myqueueerror\"" {
    # split the message field (in json format)
    json { source => \""message\"" } # where the json is
  }
}

output {
  if [type] == \""myqueuelog\"" {
    elasticsearch {
      codec => \""json\""
      hosts => [\""127.0.0.1:9200\""]
      document_id => \""%{logstash_checksum}\""
      index => \""myqueuelog-%{+YYYY}\""
      # protocol => \""http\""
      manage_template => false
    }
  }
  if [type] == \""myqueueerror\"" {
    elasticsearch {
      codec => \""json\""
      hosts => [\""127.0.0.1:9200\""]
      document_id => \""%{logstash_checksum}\""
      index => \""myqueueerror-%{+YYYY}\""
      # protocol => \""http\""
      manage_template => false
    }
  }
}

And I have found an error in the Logstash log:

[ERROR][logstash.pipeline] Error registering plugin {:plugin=>\""false  host=>\\\""0.0.0.0\\\""  port=>8888  id=>\\\""ccfb769a49le707254a6d272716b8aa18d57dfec-18\\\""  enable_metric=>true  codec=>\\\""plain_0045le0e-9c30-4d95-b400-97843e19022e\\\""  enable_metric=>true  charset=>\\\""UTF-8\\\"">  threads=>4  verify_mode=>\\\""none\\\""  additional_codecs=>{\\\""application/json\\\""=>\\\""json\""}  response_headers=>{\\\""Content-Type\\\""text/plain\\\""}>  :error=>\""Address already in use - bind - Address already in use\""} [ERROR][logstash.agent] Pipeline aborted due to error {:exception=>#  :backtrace=>[\""org/jruby/ext/socket/RubyTCPServer.java:118:in 'initialize'\""  \""org/jruby/RubyIO.java:871 ...etc... 

Thanks in advance

""",azure-virtual-machine +48012689,Azure App Service Plan CPU spikes for no obvious reason

We're experiencing CPU spikes on our Azure App Service plan for no obvious reason. It's not something that stops the service, but we'd like to understand when and how that kind of thing happens.

For example, the CPU percentage sits in the 0-1% range for days, but then all of a sudden it spikes to 98%, 45%, 60%, and comes back to the 0-1% range very quickly. Memory stays unchanged at a comfortable 40-45% level, no incoming requests, no web jobs, nothing unusual in the logs, no failures, service health OK: nothing we could point our finger to as a reason. We tried to find out through Kudu > Support > Analyze (metrics)... but we couldn't get the request submitted. It just keeps giving an error to try later.

There is only one web app running in that App Service plan; it's an ASP.NET Core 2.0 web API.

Could someone shed some light on this kind of behavior? Is this normal/expected? If so, why does it happen? Is there a danger that it spikes to 90% and doesn't immediately come back?

Just what's going on?

,azure-web-app-service +21256636,"How do I add a blob to Windows Azure?

I created a storage account on Windows Azure. Then I added a container called rubicon to my storage account and got to the following screen:

\""enter

I don't see any button/link that allows me to add a blob. It would be nice to see a \""Click here to add blob\"" link on this page, but there is nothing. I did have a look at How to use blob storage, but that only shows how to do it through code.

Where do I add a blob on the Windows Azure portal?

""",azure-storage +48903289,"web API in Azure: Authorization has been denied for this request

I have a SQL Azure database up and running. I'm using its connection string in one web API application.

When I run the application through Visual Studio using the SQL Azure connection string, I'm not getting any Authorization denied response.

Now I have deployed my web API application to Azure, and when I try to access any API controller it says Authorization has been denied for this request.

I also checked the Authentication / Authorization settings for my App Service, and it says: Anonymous access is enabled on the App Service app. Users will not be prompted for login.

\""enter

Why am I getting the Authorization denied response? Are there any settings I'm missing?

""",azure-web-app-service +19530753,How to get system time using windows azure powershell

How do I find the system time using Windows Azure PowerShell? I want the time only, and it should be in 24-hour format. I have tried get-date and [system.datetime]::now, but I want only the time, not the date.
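
For reference, Get-Date accepts a .NET format string, so the time alone (24-hour, since HH is the 24-hour specifier) can be produced like this:

```powershell
# Time only, 24-hour format (e.g. 14:05:09)
Get-Date -Format 'HH:mm:ss'

# Equivalent via ToString on the DateTime object
(Get-Date).ToString('HH:mm:ss')
```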

,azure-storage +32297378,"How can I parameterise Azure's TableOperation.Retrieve in a method in c# .NET

I have a class something like the following:

public class Table : ITable
{
    private CloudStorageAccount storageAccount;

    public Table()
    {
        var storageAccountSettings = ConfigurationManager.ConnectionStrings[\""AzureStorageConnection\""].ToString();
        storageAccount = CloudStorageAccount.Parse(storageAccountSettings);
    }

    public async Task<TableResult> Retrieve(string tableReference, string partitionKey, string rowKey)
    {
        var tableClient = storageAccount.CreateCloudTableClient();
        var table = tableClient.GetTableReference(tableReference);
        TableOperation retrieveOperation = TableOperation.Retrieve<SomeDomainModelType>(partitionKey, rowKey);
        TableResult retrievedResult = await table.ExecuteAsync(retrieveOperation);
        return retrievedResult;
    }
}

This class is a wrapper to retrieve a single entity from an Azure table. It's wrapped up and conforms to an interface so that it can be stubbed out with Microsoft Fakes during testing. It works at the moment; however, it would be more elegant if the following were more generic:

TableOperation retrieveOperation = TableOperation.Retrieve<SomeDomainModelType>(partitionKey, rowKey); 

My question is: how can I parameterise <SomeDomainModelType> so that I can use the method with any type in the domain model? Any ideas?
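
For what it's worth, the wrapper method itself can take a type parameter, constrained the way the Table SDK's own Retrieve<T> requires; a sketch of the reworked method (same fields as the class above):

```csharp
public async Task<TableResult> Retrieve<TEntity>(string tableReference, string partitionKey, string rowKey)
    where TEntity : ITableEntity, new() // satisfies TableOperation.Retrieve<T>'s constraint
{
    var tableClient = storageAccount.CreateCloudTableClient();
    var table = tableClient.GetTableReference(tableReference);
    var retrieveOperation = TableOperation.Retrieve<TEntity>(partitionKey, rowKey);
    return await table.ExecuteAsync(retrieveOperation);
}
```

The matching ITable interface method would need the same generic signature so Fakes can still stub it.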

""",azure-storage +49107284,"Cannot create a VSTS webhook subscription for punlisherId = tfs and eventId tfvc.checkin via the REST API

I am trying to create a VSTS webhook subscription for publisherId = tfs and eventType = tfvc.checkin. Here's the sample POST request:

Url : https://testvstsaccount.visualstudio.com/_apis/hooks/subscriptions?api-version=1.0

Request Body :

{   \""publisherId\"": \""tfs\""    \""eventType\"": \""tfvc.checkin\""    \""resourceVersion\"": \""1.0-preview.1\""    \""consumerId\"": \""webHooks\""    \""consumerActionId\"": \""httpRequest\""    \""publisherInputs\"": {     \""path\"": \""$/\""   }    \""consumerInputs\"": {     \""url\"": \""https://myservice/myhookeventreceiver\""   } } 

I am getting 400 Bad Request in response.

Response body :

{   \""$id\"": \""1\""    \""innerException\"": null    \""message\"": \""Subscription input 'path' is not supported at scope 'collection'.\""    \""typeName\"": \""Microsoft.VisualStudio.Services.ServiceHooks.WebApi.SubscriptionInputException  Microsoft.VisualStudio.Services.ServiceHooks.WebApi  Version=14.0.0.0  Culture=neutral  PublicKeyToken=b03f5f7f11d50a3a\""    \""typeKey\"": \""SubscriptionInputException\""    \""errorCode\"": 0    \""eventId\"": 4501 } 

Can someone please help me understand the correct way to create this webhook?

""",azure-devops +53458687,"Get instanceID in Http triggered orchestration Starter function in NodeJS

InstanceId: (Optional) The unique ID of the instance. If not specified, a random instance ID will be generated.

Is there a way to get the randomly generated instance ID in an HTTP-triggered orchestration starter function in Node.js?

""",azure-functions +49931526,Need to setup port forwarding to port 5552 on azure VM

I am doing penetration testing with a RAT (Remote Access Tool) program, and it requires a listening port to be open for its use (no preference, but I'm using port 5552). I was looking for that function on the Microsoft Azure Portal, but I couldn't find anything.

Basically I want to keep open my RDP port (3389) and at the same time forward incoming requests for port 5552 to the VM (I don't know if this makes sense to anyone but...).

Thanks in advance!

,azure-virtual-machine +43346822,"Using azure function output parameters with Dropbox connector

My flow is very simple: I want to have an Azure function that runs once a day, and then use its output to create a file in Dropbox.

The function does some processing and returns an object with 2 properties, a FileName and a FileContent; both are strings:

return new AzureFunctionResponse
{
    FileName = $\""TestFile-{DateTimeOffset.UtcNow.ToUnixTimeMilliseconds()}\"",
    FileContent = \""This is the file content\""
};

My problem is that I don't know how to use those 2 properties to set up my Dropbox connector.

Here's my LogicApp flow:

\""enter

I'd like to use the FileName and FileContent returned from my Azure Function to populate the respective fields in the Dropbox connector, but I have no idea how to set this up. I've looked for documentation, but maybe I'm not looking in the right place, because I'm not finding anything.

Also, here are the bindings in my function.json file, if that can be of any help.

{   \""disabled\"": false    \""bindings\"": [   {       \""type\"": \""httpTrigger\""        \""direction\"": \""in\""        \""webHookType\"": \""genericJson\""        \""name\"": \""req\""     }      {       \""type\"": \""http\""        \""direction\"": \""out\""        \""name\"": \""res\""     } } 
""",azure-functions +56477337,".NET Core WebApp multiple domains changing SQL connection string based on hostname can't inject httpcontext into DB Context.cs file

I am using an Azure .NET Core web app (MVC, Entity Framework), scaffolded from an existing external MS SQL database.

I register the DB in the startup ConfigureServices like this:

services.AddDbContext<MyDbContext>(options => options.UseSqlServer()); 

And it works fine so long as I set the connection string in the MyDbContext.cs OnConfiguring() method like this:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer(\""Server=10.10.10.10;Database=MyDb;user id=username;password=password;\"");
}

Then I can simply in a Controller say:

private readonly MyDbContext _context;

public HomeController()
{
    _context = new MyDbContext();
}

public IActionResult Index()
{
    var item = _context.TableName.FirstOrDefault(d => d.Id > 0);
}

And this works fine - data flows in.

My problem is that I need to change the SQL connection string depending on which hostname is connecting. So I could set the connection either in the startup ConfigureServices(), if I can establish the hostname being used there, or in the DbContext's MyDbContext.cs OnConfiguring() method.

I can get the hostname only from the HttpContext, and that isn't available to query in startup, so far as I can tell.

If I try and inject it into the context.cs DB file like this:

public partial class MyDbContext : DbContext
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public MyDbContext()
    {
    }

    public MyDbContext(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor; // For context/url
    }

    public MyDbContext(DbContextOptions<MyDbContext> options, IHttpContextAccessor httpContextAccessor)
        : base(options)
    {
        _httpContextAccessor = httpContextAccessor; // For context/url
    }

Then the startup ConfigureServices line services.AddDbContext can't pass the HttpContext (it doesn't exist at that point?); none of the constructor overloads match, and I can't inject IHttpContextAccessor into the DbContext no matter how I try!

services.AddDbContext<MyDbContext>(options => options.UseSqlServer(\""some-connection-string\"")); // Doesn't pass httpcontext
services.AddDbContext<MyDbContext>(); // Doesn't pass httpcontext
services.AddDbContext<MyDbContext>(is there a way to pass it?);

It doesn't seem to work like injecting it into Controllers, which does work fine, as there's no constructor issue there...

Any ideas on how I can find the hostname and change the SQL connection string given this setup?

For various reasons this needs to be a single WebApp that multiple domains use by the way.

Edit:

After a sleepless night, I woke up and decided to simply connect to all (three) databases I need and then decide which context to use in the controller, as I have the HttpContext there to determine which host is connecting. It's not ideal, but this is a low-overhead web app, so I'm happy enough to go with it like that. I think perhaps there is/was a solution out there, though...
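
For reference, EF Core's AddDbContext has an overload whose options action also receives the IServiceProvider, which is one way to pick the connection string per request host at registration time (a sketch; MapConnectionString is a hypothetical host-to-connection-string lookup):

```csharp
services.AddHttpContextAccessor(); // makes IHttpContextAccessor resolvable

services.AddDbContext<MyDbContext>((serviceProvider, options) =>
{
    var accessor = serviceProvider.GetRequiredService<IHttpContextAccessor>();
    var host = accessor.HttpContext?.Request.Host.Host; // null outside a request
    options.UseSqlServer(MapConnectionString(host));    // hypothetical lookup
});
```

The options action runs when a context instance is requested, not at startup, which is why the HttpContext is available by then.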

""",azure-web-app-service +45909870,"Azure function - VS2017 Tooling - Missing binding in function.json

I created a simple HttpTrigger Azure function using VS 2017 15.3 (with the NuGet package Microsoft.NET.Sdk.Functions 1.0.2) with the wizard. It gave me the following code:

public static class Function1
{
    [FunctionName(\""Function1\"")]
    public static HttpResponseMessage Run([HttpTrigger(AuthorizationLevel.Anonymous, \""get\"", \""post\"", Route = \""HttpTriggerCSharp/name/{name}\"")]HttpRequestMessage req, string name, TraceWriter log)
    {
        log.Info(\""C# HTTP trigger function processed a request.\"");

        // Fetching the name from the path parameter in the request URL
        return req.CreateResponse(HttpStatusCode.OK, \""Hello \"" + name);
    }
}

When I run the function in debug mode with VS and call it with Postman, it works fine; I get the response body.

When I start the same function using the CLI (func host start) and call it with Postman, the function doesn't return the body. I get an HTTP 200 with empty content. :(

I found that in the generated function.json there is no http out binding. My generated function.json:

{      \""generatedBy\"": \""Microsoft.NET.Sdk.Functions-1.0.0.0\""       \""configurationSource\"": \""attributes\""       \""bindings\"": [          {              \""type\"": \""httpTrigger\""               \""route\"": \""HttpTriggerCSharp/name/{name}\""               \""methods\"": [ \""get\""  \""post\"" ]               \""authLevel\"": \""anonymous\""               \""name\"": \""req\""          }      ]       \""disabled\"": false       \""scriptFile\"": \""..\\\\bin\\\\FunctionApp1.dll\""       \""entryPoint\"": \""FunctionApp1.Function1.Run\""  } 

When I add the http out binding, it works fine using func host start:

{      \""generatedBy\"": \""Microsoft.NET.Sdk.Functions-1.0.0.0\""       \""configurationSource\"": \""attributes\""       \""bindings\"": [          {              \""type\"": \""httpTrigger\""               \""route\"": \""HttpTriggerCSharp/name/{name}\""               \""methods\"": [ \""get\""  \""post\"" ]               \""authLevel\"": \""anonymous\""  \""name\"": \""req\""          }           {              \""type\"": \""http\""               \""direction\"": \""out\""               \""name\"": \""res\""          }      ]       \""disabled\"": false       \""scriptFile\"": \""..\\\\bin\\\\FunctionApp1.dll\""       \""entryPoint\"": \""FunctionApp1.Function1.Run\""  } 

It's very strange that it works in debug mode but not using the CLI directly...

Thanks for your help

""",azure-functions +50825960,Azure - Uninstall IaaSDiagnostics Extension after manually deleting Storage account

I need to uninstall the IaaSDiagnostics Extension for my VM.

However, I manually deleted the diagnostic storage account, and now when I try to uninstall the diagnostics extension I get an error saying:

Provisioning state Provisioning failed.

StorageAccount 'xxxxxdiag160' associated with VM 'xxxxxx' for boot diagnostics encountered an error. Please look at the error code for more information about the error.. StorageAccountNotFound

Provisioning state error code ProvisioningState/failed/StorageAccountNotFound

How can I delete the IaaSDiagnostics extension now that the associated Storage Account has already been deleted?

,azure-storage +49269908,"Repository info in Visual Studio Solution

I copied and pasted an existing solution with multiple projects into a new folder. The original solution is bound to a repository on VSTS.

I was careful to copy only the actual project files I created along with the sln file. When I opened the new version I got the following messages.

\""enter

\""enter In the sln file I don't see any information about the repo. Where is the repo information stored? What file do I need to edit to remove all references to a repository?

P.S. I'm using Visual Studio 2017 on my computer and on VSTS I'm using TFVC for version control.

Update: When I go to File > Source Control > Advanced > Change Source Control, I see no bindings. See image below.

When I click \""Team Explorer\"" I get the following message.

I don't think I have any workspaces configured. See below:

""",azure-devops +9186200,How can I automate publishing and referencing static content to Azure Blob Storage?

To optimise the speed of my MVC3 Azure site, it has been suggested that I should host my JS, CSS, and image files (basically any static content) in Azure public containers with CDN enabled. These should then be linked to instead of being stored on the web role.

Is there any way to automate this as part of publishing the solution, so that I still get underlining etc. in VS2010?

Effectively anything that is stored in the local MVC3 content & scripts folder should be copied to Azure storage and referenced from there.

It seems like something that should be a straightforward option. Am I missing something obvious?

Thanks

,azure-storage +41951912,Azure functions experimental templates

I am trying to use Azure Functions, and I see that there are Sample and Experimental types of templates. Can I trust experimental templates in a production environment?

,azure-functions +43781176,Best option to schedule payments: azure scheduler WebJob or Azure Functions or a Worker Role?

I've hosted my website on Azure, and now I want to schedule payments on a monthly basis. I am using Authorize.net for payments, but I cannot use their recurring billing feature, as it gives very little control. I have to perform checks in the database, make payments, and update records. What should I use: Azure Scheduler, an Azure WebJob, Azure Functions, or a Worker Role?

,azure-functions +28840249,Set expiry limit for blob

I'm using Azure Storage for storing information, like a cache mechanism. For a given input, I do the job the first time, and after that I save the result in the cache for further use. When I need to solve the problem with the same given input, I get the already-prepared solution directly from storage. This is all implemented.

I'm trying to add an expiry limit for files in my cache. Each result will be stored for a maximum of 30 days; after that it will be automatically deleted.

The naive solution is to also implement a background worker that runs once per day, iterates over all files, and deletes them according to their creation time.
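
For reference, that daily sweep would look roughly like this against the classic storage SDK (a sketch; container setup omitted, and LastModified is used as a stand-in for creation time, which the classic blob properties don't expose directly):

```csharp
// Delete blobs older than 30 days (flat listing across virtual directories).
foreach (IListBlobItem item in container.ListBlobs(null, true))
{
    var blob = item as CloudBlockBlob;
    if (blob != null &&
        blob.Properties.LastModified < DateTimeOffset.UtcNow.AddDays(-30))
    {
        blob.DeleteIfExists();
    }
}
```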

Is there a better solution?

,azure-storage +47348450,"Build exe and webapp using msbuild commandline

I am trying to build a solution that contains multiple projects using the MSBuild command line.

Four of them create an exe file as output, and the rest create a single web application.

Now when I try to build them together using msbuild with the following properties, it throws an error:

/p:WebPublishMethod=Package /p:DeployOnBuild=true /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:outdir=\""$(build.artifactstagingdirectory)\\\\\"" 

Error -

error MSB4057: The target \""ResolveWebJobFiles\"" does not exist in the  project. 

Note - If I remove the property "/p:WebPublishMethod=Package" then it runs well, but it doesn't create the zip file.

Can anyone please suggest a property by which I can create the zip file?

""",azure-devops +55024741,Can I run multiple instances of an azure function listening to the same queue

Background: I have been running into an issue recently where my function cannot handle the load and the queue messages are building up. In the long term I am looking at the code to find where the bottlenecks are, but in the short term I need to solve this problem.
Question: Can I add multiple instances of the same Azure Function (even if it is a rename: myjobrunner1, myjobrunner2, etc.) that all listen to the same queue? Would this help in my situation?

Some caveats:
The Premium plan looks good, but I cannot test a preview while in production at the moment.
Adding a dedicated App Service is an option, but it is a longer-term fix, and I am having the trouble now.
Code fixes are in process to handle the load better and improve performance, but the fact that outside services are what is holding them up is a factor.
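Multiple listeners on one queue is the classic competing-consumers pattern: each queue message is leased to a single reader, so extra listeners split the load rather than duplicate it. A minimal in-process sketch of that idea (plain threads and queue.Queue standing in for function instances and the Storage queue — an analogy, not Azure code):

```python
import queue
import threading

work = queue.Queue()
results = []
lock = threading.Lock()

def runner(name: str) -> None:
    # Each "instance" competes for messages; a message goes to exactly one consumer.
    while True:
        try:
            msg = work.get_nowait()
        except queue.Empty:
            return
        with lock:
            results.append((name, msg))
        work.task_done()

for i in range(20):
    work.put(i)

threads = [threading.Thread(target=runner, args=(f"myjobrunner{n}",)) for n in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(m for _, m in results) == list(range(20)))  # → True: no message lost or duplicated
```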

,azure-functions +43810248,"Powershell for an Advanced Application Restart on an Azure Web App

It is possible to use the Restart-AzureRmWebApp PowerShell cmdlet to restart a web app, but this will restart all servers in the plan simultaneously, giving a short downtime.

The Azure Portal has an "Advanced Application Restart" feature that uses a time delay between restarting individual instances.

Is there any way to invoke that from PowerShell?

""",azure-web-app-service +51513795,"Create instance of an object in a new app domain in azure function throws a FileNotFoundException

I need to run some code in a new app domain, so I am trying to create an instance of my object in the new app domain. Here is the code I am using:

public static class Program
{
    private static ITemplateEngineProvider _templateEngineProvider;

    static Program()
    {
        AppDomain ad = AppDomain.CreateDomain("New domain");

        ObjectHandle handle = ad.CreateInstance(
                assemblyName: typeof(RazorTemplateEngineProvider).Assembly.FullName,
                typeName: "RazorTemplateEngineProvider",
                //ignoreCase: false,
                //bindingAttr: BindingFlags.CreateInstance,
                //binder: null,
                //args: new object[] { new string[] { templatePath, layoutPath } },
                //culture: CultureInfo.InvariantCulture,
                //activationAttributes: null
                );

        _templateEngineProvider = (RazorTemplateEngineProvider)handle.Unwrap();
    }
}

RazorTemplateEngineProvider is a custom public class that has a public constructor. It has been implemented in a class library (MyCustomLib.dll) which I referenced inside the Azure Function. The class implements an interface defined in another class library (IMyCustomLib.dll), referenced only by the previous class library, not by the Azure Function.

At the moment there is no code inside the RazorTemplateEngineProvider class:

public class RazorTemplateEngineProvider : MarshalByRefObject, ITemplateEngineProvider
{
    public RazorTemplateEngineProvider()
    { }
}

When I try to do ad.CreateInstance, a FileNotFoundException is thrown:

Could not load file or assembly 'MyCustomLib.dll, Version=1.0.0.0, Culture=neutral, PublicKeyToken=...' or one of its dependencies. The system cannot find the file specified.

But the file exists and it should already be loaded correctly. In fact, if I run this 'query':

IEnumerable<string> loadedAssemblies = AppDomain.CurrentDomain.GetAssemblies()
   .Where(a => !a.IsDynamic && !a.FullName.Contains("Version=0.0.0.0") && File.Exists(a.Location)
            && !a.Location.Contains("CompiledRazorTemplates.Dynamic") && a.FullName.Contains("My"))
   .Select(f => f.FullName)
   .ToArray();

I see both of my DLLs. So why do I get that error?

Thank you

UPDATE

I think the problem is Azure, because I copied and pasted my code into a console application and there it works.

UPDATE

I am watching the fusion log and it seems it is trying to load the assembly from a "wrong" path: file:///C:/Users/csimb/AppData/Local/Azure.Functions.Cli/1.0.12/MyCustomLib.dll. I expected the path to be the bin folder...

""",azure-functions +51346603,"Visual Studio Team Explorer sync and pull error

I am trying to push my code changes to a VSTS git repository. When I try to sync I get an error in my output window:

Git failed with a fatal error. ArgumentNullException encountered.
Value cannot be null. Parameter name: path From https://xxxxxxxxxxxxxxxxxxxx.visualstudio.com/_git/xxxxxxxxxxxxx = [up to date] master -> origin/master

but the code is successfully pushed to my VSTS repository, and this error only appears when I try to pull from VSTS.

""",azure-devops +18765900,"Subdomain on Azure VM IIS (x.y.cloudapp.net)

I'm pretty new to Azure, but I've set up a VM running Windows Server 2008 with IIS hosting an Umbraco solution...

I can browse the site perfectly using "x.cloudapp.net". But I have set up some hostnames in Umbraco for subsites.

E.g. I have "y.x.cloudapp.net", and it is also added to the IIS bindings. But it is not browsable at all?

""",azure-virtual-machine +48318536,"Cloning repository using GIT on Powershell ISE - Proxy issues

I am getting proxy errors using PowerShell ISE with Git-Posh.

When using Git alone (Bash), cloning goes fine. I had to add these to the .gitconfig file:

[http]
    proxy = http://localhost:1800
[https]
    proxy = http://localhost:1800

However when using Git-posh on a PowerShell ISE script I get exceptions:

1) Command one generates this exception.

$resp = Invoke-WebRequest -Headers $headers -Uri ("{0}/_apis/git/repositories?api-version=1.0" -f $url)
$json = ConvertFrom-Json $resp.Content

Invoke-WebRequest : Proxy Authorization Required Description: Authorization is required for access to this proxy

2) Cloning generates this exception

 git clone --mirror $url 

fatal: unable to access 'https://pn%fastCars.onmicrosoft.com:3kfokgwgwgwiigjiwjgjwigiiqegqegewrwghdasdasfggaffaa@fastCars.visualstudio.com/Ferrari/_git/FerrariF50-PerformanceTests/': Failed to connect to localhost port 1800: Connection refused

Does anyone know which command can turn this around?

""",azure-devops +49283453,Azure portal. What is the best practise way to store and make deal with not single line application settings

Could someone advise on the best way to work with big application configurations that look like XML/JSON data? These data contain different information (mostly static, but rarely it can change), but none of it has security value. For instance, it can be item options for a user control (like a dropdown) on an application page, or static data used as markup from which a web page creates a user control, and so on.

I have several approaches for this:

  1. Key Vault. As I understand it, the main idea of this storage is to work with security data like connection strings, passwords, and so on. How about using it for bigger and wider settings? The big plus for me is that it contains built-in cache functionality, but it doesn't look like the best-practice way to me.
  2. Storage account / Cosmos DB - as far as I can see, both of these are used similarly and could serve my purpose. The question is which is the most economic and productive way for me, and would these options be better than the Key Vault way?

So what is the most common solution for this target?

Thanks.

,azure-storage +41869897,"Error when calling method in custom assembly from Azure functions

see screenshot for the express version:
[screenshot]

I have an Azure Function (in Visual Studio) that correctly triggers on a Service Bus event. In its Run method I want to call a method in a custom assembly. This works OK until I use any method that uses Dynamics CRM assemblies (I have tried both the assemblies from the downloadable SDK and the NuGet package; I get the exact DLL it asks for in the error message). As soon as I call my method I get the error below. I can run this exact method from a console app. (My custom assembly is a standard (not Core) class library...)

Additional information: Could not load file or assembly 'Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

""",azure-functions +42661047,start VM on RDP request

I have a Windows 10 VM in Azure. I connect to this machine over RDP. Because most of the time this machine is not in use, I'd like to shut it down to save costs. My issue is then reconnecting to it over RDP afterwards. How do I start the machine remotely?

,azure-virtual-machine +45325736,"PartitionKey was not specified in azure table storage

I am trying to load/import data into Table Storage from a CSV file via Azure Storage Explorer, but I am getting the following error:

An error occurred while opening the file 'D//sample.csv'. The required property 'PartitionKey' was not specified.

[screenshot]

Could you kindly clarify the importance of PartitionKey and RowKey in Azure Table Storage?

""",azure-storage +23518078,"store nlog generated logs to Azure Blob Storage in seperate columns

I've enabled diagnostics logging to Blob Storage for an Azure Website I am trying out. I've also set NLog to write to Trace, so that the entries are in turn written to the Azure blob. The NLog layout is set to CSV. This works, and the generated logs are output to blob storage. If this were logging to a traditional file, that file would be a CSV file which I could open in Excel to analyse the logs better.

Nlog configuration file copied below:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!--
  See http://nlog-project.org/wiki/Configuration_file
  for information on customizing logging rules and outputs.
  -->
  <targets>
    <!-- add your targets here -->
    <!--
    <target xsi:type="File" name="f" fileName="${basedir}/logs/${shortdate}.log"
            layout="${longdate} ${uppercase:${level}} ${message}" />
    -->
    <target xsi:type="Trace" name="trace" >
      <layout xsi:type="CsvLayout" >
        <column name="shortdate" layout="${shortdate}" />
        <column name="time" layout="${time}" />
        <column name="logger" layout="${logger}"/>
        <column name="level" layout="${level}"/>
        <column name="machinename" layout="${machinename}"/>
        <column name="processid" layout="${processid}"/>
        <column name="threadid" layout="${threadid}"/>
        <column name="threadname" layout="${threadname}"/>
        <column name="message" layout="${message}" />
        <column name="exception" layout="${exception:format=tostring}" />
      </layout>
    </target>
  </targets>
  <rules>
    <!-- add your logging rules here -->
    <logger name="*" minlevel="Trace" writeTo="trace" />
    <!--
    <logger name="*" minlevel="Trace" writeTo="f" />
    -->
  </rules>
</nlog>

Windows Azure Diagnostics saves the diagnostics info as a CSV file in blob storage. The CSV file has the columns below:

date level applicationName instanceId eventTickCount eventId pid tid message activityId

However, the entire NLog message is written in the Message column. This is probably because Azure saves the Diagnostics.Trace message there, and that is where NLog is writing its logs. For example:

2014-05-07T12:18:49 Information KarlCassarTestAzure1 10cd67 635350619297036217 0 2984 1 "2014-05-07 12:18:49.6254 TestAzureWebApplication1.MvcApplication Info RD0003FF410F59 2984 1  Application_Start "

The NLog message is the following:

"2014-05-07 12:18:49.6254 TestAzureWebApplication1.MvcApplication Info RD0003FF410F59 2984 1  Application_Start "

It is escaped and fits entirely in one CSV column, which is not what I want. Any idea if there is something that can be done about this?
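Since the NLog line lands escaped inside the diagnostics Message column, one workaround is to post-process the downloaded blob: parse the outer diagnostics CSV row first, then parse the embedded NLog CSV out of its message field. A sketch with made-up sample data (the exact column layout here is assumed from the post above, not verified against Azure):

```python
import csv
import io

# Hypothetical outer diagnostics row; the NLog CsvLayout line sits escaped in `message`.
outer = ('2014-05-07T12:18:49,Information,MySite,10cd67,635350619297036217,0,2984,1,'
         '"2014-05-07 12:18:49.6254,MyApp.MvcApplication,Info,Application_Start",')
columns = ["date", "level", "applicationName", "instanceId", "eventTickCount",
           "eventId", "pid", "tid", "message", "activityId"]

row = next(csv.reader(io.StringIO(outer)))
record = dict(zip(columns, row))

# Second pass: the message field is itself a CSV line produced by NLog's CsvLayout.
inner = next(csv.reader(io.StringIO(record["message"])))
print(inner)  # → ['2014-05-07 12:18:49.6254', 'MyApp.MvcApplication', 'Info', 'Application_Start']
```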

""",azure-storage +55299880,"Azure functions TypeScript decorator

Recently Azure functions released support for TypeScript:

import { AzureFunction, Context, HttpRequest } from '@azure/functions';

@some_decorator // ???
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
};

export default httpTrigger;

I'm looking for a good approach to implement a pre-function call. For instance, the pre-function could do authorization checks or whatever else is necessary before the function in question is executed.

I'm wondering which TypeScript decorators would be the best and cleanest option, but I'm not sure about the implementation.

""",azure-functions +52320153,"Unable to install the DependencyAgent Azure VM extension

I have an environment with about a dozen VMs, each of which has the Microsoft Monitoring Agent reporting to a central OMS workspace. In addition, these VMs have the Dependency Agent installed.

Five of the VMs have the DependencyAgent extension reporting their state on the portal as "Provisioning succeeded", and I can see them in the Service Map solution in Log Analytics. However, for some reason the other six show the extension as "Transitioning".

When I log into one of those VMs and view the logs for the extension in C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitoring.DependencyAgent.DependencyAgentWindows\9.6.2.1366,

I see:

Execution Output: Start-Service : Failed to start service 'Microsoft Dependency Agent (MicrosoftDependencyAgent)'

I try to manually start the service but get "The Microsoft Dependency Agent service on Local Computer started and then stopped." In the Event Viewer I see "The Microsoft Dependency Agent service entered the stopped state."

Any idea what I could possibly be doing wrong or where I can get additional logs?

""",azure-virtual-machine +39856227,"Unable to retrieve image blob with encoded & (ampersand) in SAS part of url

I have image blob url in following form: https://my-storage.blob.core.windows.net/my-container/my-virtual-directory/image-name.png?sv=2015-12-11&sr=b&si=my-policy&sig=ZiKivSYXr63vBtdY7IsxVQ01WmrFnK%2FC9xABVrho6sY%3D&se=2016-10-04T15%3A37%3A11Z

I use this image source in my HTML editor inside an <img> tag. The problem is that when the URL gets encoded, & is replaced with &amp;, and after that my image is not available from that URL. I tried it directly in the browser (replacing every & with &amp;) and it returns the response "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature."

Why is this happening? What might be the solution?

""",azure-storage +54686530,"How to check the aks api in Azure Devops Pipeline

I have a simple Node.js API which has a /health path that returns "OK".

How can I check the /health result within the Azure DevOps release pipeline?

My pipeline pulls the artifact from the Azure Container Registry, deploys to AKS, then promotes the service in Ingress and API Management for public access.

""",azure-devops +55644026,Azure function - should context.done be called after each loop or at end

In an Azure function say I have:

const cosmosDBTrigger: AzureFunction = async function (context: Context, documents: any[]): Promise<void> {
    if (!!documents && documents.length > 0) {
        documents.forEach(function (document) {
            context.bindings.outputdocuments = document;
            // 1 - SHOULD IT GO HERE
        });
    }
    // 2 - SHOULD IT GO HERE
}

Is the correct place for context.done position 1 or 2? Namely, should it be called after each document in the loop, or at the very end?

Thanks.

,azure-functions +52601259,"How do we trigger a build only for tags matching a pattern

I want to create a build script specifically when I push a tag pattern on git (not a branch).

But I cannot find it in the

I'm looking specifically at a "pattern" rather than a static string.

""",azure-devops +14169539,Proper CloudTableClient instance lifecycle?

I'm using the new WindowsAzure.Storage 2.0 (might not be relevant information) and I'm implementing data access using CloudTableClient. Most samples I've seen instantiate a CloudTableClient in the ctor of an ASP.NET MVC controller (instantiated per web request). Is there any performance penalty in doing so? Would it be wise to keep a long-running instance in singleton style?

,azure-storage +16199683,Optimize Windows Azure Table Storage?

I have about 28 GB of Data-In for a little over 13.5 million rows stored in Windows Azure Table Storage.

6 columns, all ints except 1 decimal and 1 datetime. The PartitionKey is about 10 characters long. The RowKey is a GUID.

This is for my sanity check -- does this seem about right?

The SQL database I migrated the data from has WAY more data and is only 4.9 GB.

Is there a way to condense the size? I don't suspect renaming properties will put a huge dent in this.

*Note this was only a sampling of data to estimate costs for the long haul.
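For the sanity check, the commonly cited estimation formula for Table Storage entity size can be sketched as follows (treat it as an approximation, not an exact billing rule; the property names and the decimal-stored-as-double assumption are illustrative):

```python
# Estimated entity size:
#   4 bytes overhead
# + 2 bytes per char of (PartitionKey + RowKey)
# + per property: 8 bytes overhead + 2 bytes per char of its name + the value size
VALUE_SIZE = {"int32": 4, "int64": 8, "double": 8, "datetime": 8, "guid": 16}

def entity_size(pk: str, rk: str, props: dict) -> int:
    size = 4 + 2 * len(pk + rk)
    for name, kind in props.items():
        size += 8 + 2 * len(name) + VALUE_SIZE[kind]
    return size

# The shape from the question: ~10-char PartitionKey, GUID-string RowKey (36 chars),
# 4 int columns, 1 decimal (assumed stored as a double), 1 datetime.
props = {"c1": "int32", "c2": "int32", "c3": "int32", "c4": "int32",
         "c5": "double", "c6": "datetime"}
print(entity_size("partition1", "f47ac10b-58cc-4372-a567-0e02b2c3d479", props))  # → 200
```

At roughly 200 bytes per entity, 13.5 million rows would be on the order of 2-3 GB of stored data, which suggests the 28 GB figure reflects ingress (Data-In) rather than the stored size.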

,azure-storage +53105735,"Testing Azure Function With Local Url Error

I am building an Azure Function and just updated to the latest everything. This is my code:

log.Info($"C# Timer trigger function executed at: {DateTime.Now}");

var localCall = httpClient.GetAsync(urls[EnvironmentNames.LOCAL]);
var localCallResult = await localCall;

log.Info($"{EnvironmentNames.LOCAL} call status code {localCallResult.StatusCode}");

The url is:

{ EnvironmentNames.LOCAL, $"local.mysite.net:5050/doThings" }

I am getting the following error when testing locally in Visual Studio:

System.ArgumentException: Only 'http' and 'https' schemes are allowed.

Parameter name: requestUri

Can I not test Azure calls locally?
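The exception points at the request URI's scheme: "local.mysite.net:5050/doThings" has no http:// prefix, so it is not an absolute http/https URL. A quick pre-flight sketch of that check (pure stdlib; the URL is the one from the post):

```python
from urllib.parse import urlparse

def has_http_scheme(url: str) -> bool:
    # HttpClient-style requirement: only absolute 'http' or 'https' URIs are allowed.
    return urlparse(url).scheme in ("http", "https")

print(has_http_scheme("local.mysite.net:5050/doThings"))          # → False
print(has_http_scheme("http://local.mysite.net:5050/doThings"))   # → True
```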

""",azure-functions +32083889,"Not able to check in project to Visual Studio Online due to missing files

I'm trying to check in an existing project to Visual Studio Online (VSO) for the first time, but I keep getting errors due to missing files. These files appear to be items that we removed. I keep adding them to the "Exclude" list, but there are too many of them, e.g. simple PNG images, JS files, etc.

I'm not sure where the "inventory" of files is kept in Visual Studio 2015.

Is there a quick and easy way to remove these missing files' records from Visual Studio so that I can check my project into VSO?

""",azure-devops +49501542,"Azure Create Container

After creating an Azure Storage account using the Python SDK, proceeding to create a storage blob container throws the following error: ResourceNotFound - The specified resource does not exist. RequestId:508e8df1-301e-004b-224e-c5feb1000000 Time:2018-03-26T22:03:46.0941011Z

Code snippet:

def create_blob_container(self, storage_account_name, storage_account_key, version):
    try:
        account = BlockBlobService(account_name=storage_account_name, account_key=storage_account_key.value)
        container_name = 'dummy container'
        container = account.create_container(container_name)
        if container:
            print 'Container %s created' % (container_name)
        else:
            print 'Container %s exists' % (container_name)
    except Exception as e:
        print e

Does anybody have any idea how to go about this?

P.S.: I am waiting for the provisioning state of the storage account to be 'succeeded' before proceeding to create the container.

Any help is greatly appreciated.

""",azure-storage +38951062,"Unable to execute IIS Administration commands on remote machine from Visual Studio Team Services (was Visual Studio Online) Build

I have a set of PowerShell scripts for managing the entire deployment. We migrated our entire codebase to Visual Studio Team Services (previously Visual Studio Online) and I am trying to automate the entire deployment.

The steps I am following on high level are :

  • Restore the packages and build the solutions
  • Package the required artifacts into a single folder (includes binaries, scripts, dacpac, etc.)
  • Copy the package to an Azure VM using Azure File Copy
  • Execute the scripts on the target machine

[Build steps screenshot]

The issue I am facing is that none of the IIS administration commands execute on the remote machine, e.g. Remove-WebSite/Remove-WebAppPool are not working.

I also do not see any error being thrown by these commands.

Is there anything specific which needs to be enabled to run these commands?

NOTE: I am able to get the same scripts working fine when I run them from the server directly. The issue only occurs when I use the 'PowerShell on Target Machines' build step in Team Services.

""",azure-devops +42645408,Rails and Azure VM: where to set environment variables (mailer)

I have a Rails app hosted on an Azure virtual machine.

Every mailer tutorial recommends to store your domain and settings for incoming and outgoing emails in environment variables.

Is it safe then to just set them as normal Linux environment variables in the virtual machine? Or is it better to use the Azure portal?

,azure-virtual-machine +53964647,"Azure DevOps: create a comment on behalf of another user

I'm looking for a way to add a comment to a work item on behalf of another user (impersonate another user).

VssConnection connection = new VssConnection(new Uri(url), new VssClientCredentials());
WorkItemTrackingHttpClient client = connection.GetClient<WorkItemTrackingHttpClient>();

patchDocument.Add(
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/System.History",
        Value = "Sample comment 1"
    }
);

await client.UpdateWorkItemAsync(patchDocument, id);
""",azure-devops +52798199,"Access Is Denied error pushing package into my azure devops artifacts nuget feed

In my question here I managed to push one package to my feed; however, I now get an Access is denied error.

According to this question I should be prompted for a user name and password. This did occur for the package I managed to push, but it no longer occurs.

What do I need to do?

Studying the docs

this issue on Git is relevant

[Update]

Something strange has happened to my folder. I get the error if I just type nuget at the DOS prompt. If I create a new folder, extract nugetcredentialprovider.zip into it, and then run nuget, I don't get the error.

the cli reference

push reference

configuring nuget behaviour

""",azure-devops +55236470,mapreduce job on yarn exited with exitCode: -1000 because of resource changed on src filesystem
    Application application_1552978163044_0016 failed 5 times due to AM Container for appattempt_1552978163044_0016_000005 exited with exitCode: -1000 

Diagnostics:

java.io.IOException: Resource abfs://xxx@xxx.dfs.core.windows.net/hdp/apps/2.6.5.3006-29/mapreduce/mapreduce.tar.gz changed on src filesystem (expected 1552949440000, was 1552978240000). Failing this attempt. Failing the application.

,azure-storage +55685233,I want to integrate oauth code grant flow for and spa and back end is hosted on azure functions

I am working on an SPA in Angular with Azure Functions as the back end, using Azure Active Directory for single sign-on. I have implemented the implicit flow and it works fine, but the issue is that I do not want to expose the access_token in the browser.

I want to implement the code grant flow with PKCE validation. I need recommendations on how to do it properly with Azure Functions.

,azure-functions +43610911,"Azure Web App web.config Rewrite not working

I use an Azure Web App as a reverse proxy. When I typed a URL like "www.ABC.com" it was working, but "www.ABC.com/abc" wasn't working; it changed my URL to "www.ABC.com/abc". I just want to keep the URL on the Azure Web App like "http://ABC.azurewebsites.net". Does anybody know how to solve this problem?

<rule name="Proxy" stopProcessing="true">
    <match url="(.*)" />
    <action type="Rewrite" url="https:/www.ABC.com/abc/{R:1}" />
    <serverVariables>
        <set name="HTTP_X_UNPROXIED_URL" value="https:/www.ABC.com/abc/{R:1}" />
        <set name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" value="{HTTP_ACCEPT_ENCODING}" />
        <set name="HTTP_X_ORIGINAL_HOST" value="{HTTP_HOST}" />
        <set name="HTTP_ACCEPT_ENCODING" value="" />
    </serverVariables>
</rule>
""",azure-web-app-service +46678070,"SQL Lite on Azure App Service - Inserts Slow and Timeout

We have a process that needs to create a SQLite database with a couple of tables holding about 750k records / 100 MB. It then gets uploaded elsewhere (an Azure Storage blob). I know Azure Apps have very slow disk I/O; running this locally takes a few seconds, but it always times out on Azure. I've tried setting WEBSITE_LOCAL_CACHE_OPTION to Always and using the temp folder, but that didn't help.

I looked into using a SQLite in-memory database, but there seems to be no way to avoid the file system if I want to convert that into a byte array (or stream), which is slow in an Azure app. Ideally, getting access to the in-memory database to stream to a blob would be the best-case scenario.

Are there any tweaks in SQLite or in the Azure App Service that would allow this to finish in a reasonable amount of time?

Using ServiceStack's OrmLite. Here is an example:

using (var trans = dba.OpenTransaction(System.Data.IsolationLevel.ReadUncommitted))
{
    dbLite.InsertAll(locs);
    foreach (var s in sales)
    {
        dbLite.Insert<Sales>(s);
    }

    trans.Commit();
}

Interestingly enough, I got the time down from never finishing (after 10 minutes it had written 5 MB, so I know it would never finish) to 4-5 minutes with:

dbLite.ExecuteSql("pragma page_size = 8192");
dbLite.ExecuteSql("pragma synchronous = OFF");
dbLite.ExecuteSql("PRAGMA journal_mode = OFF");

This compares to 1 second locally. Setting synchronous mode to OFF seems to help the most in my scenario.

""",azure-web-app-service +39244265,"Azure web app redirect http to https

I use the Azure cloud with a Web App, and my server side is written in Node.js. When the Web App receives an HTTP request, I want to redirect the request to HTTPS. I found a solution: I put this in my web.config file inside the rules tag:

<rule name="Force HTTPS" enabled="true">
  <match url="(.*)" ignoreCase="false" />
  <conditions>
    <add input="{HTTPS}" pattern="off" />
  </conditions>
  <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="false" redirectType="Permanent" />
</rule>

The problem is that when I type "https://myURL.com" in the browser it redirects to the main screen and everything is OK, but when I change https to http ("http://myURL.com") it redirects to "https://myURL.com/" and appends "bin/www", so the URL looks like "http://myURL.com/bin/www" and the response is: page not found.

The question is: how do I redirect to a clean URL without extra data added to it?

Part of my web.config file:

<rewrite>
  <rules>
    <!-- Do not interfere with requests for node-inspector debugging -->
    <rule name="NodeInspector" patternSyntax="ECMAScript" stopProcessing="true">
      <match url="^bin/www\/debug[\/]?" />
    </rule>
    <!-- First we consider whether the incoming URL matches a physical file in the /public folder -->
    <rule name="StaticContent">
      <action type="Rewrite" url="public{REQUEST_URI}" />
    </rule>
    <!-- All other URLs are mapped to the node.js site entry point -->
    <rule name="DynamicContent">
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="True" />
      </conditions>
      <action type="Rewrite" url="bin/www" />
    </rule>
    <!-- Redirect all traffic to SSL -->
    <rule name="Force HTTPS" enabled="true">
      <match url="(.*)" ignoreCase="false" />
      <conditions>
        <add input="{HTTPS}" pattern="off" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" appendQueryString="false" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
<!-- 'bin' directory has no special meaning in node.js and apps can be placed in it -->
<security>
  <requestFiltering>
    <hiddenSegments>
      <remove segment="bin" />
    </hiddenSegments>
  </requestFiltering>
</security>

Thanks for answers Michael.

""",azure-web-app-service +42851059,Strategies to encrypt on Azure without using KeyVault

We need to store some content in Azure Blob Storage and want to encrypt it prior to storing it (we don't want to rely on Azure Storage encryption at rest). The issue is that we do not want to store our encryption keys in Azure (e.g. Key Vault); we want to keep them outside of Azure. Any suggestions on strategies for achieving this?

,azure-storage +55244735,"Host Serial Port forward to Azure VM Client

I am writing a barcode program using a VM in Azure. The software on my local machine emulates a serial port using a USB port.

Is there a way to forward data from, for example, COM port 3 to the Azure VM?

******* edited 3/21/19 - in response to my realization and SumanthMarigowda-MSFT

[screenshot]

but in the Azure VM I am only seeing COM1 and COM2:

[screenshot]

Gina

""",azure-virtual-machine +54755278,How to set up Azure IP alias directly using DNS name?

I'm setting up my website on an Azure service. My DNS zone is 'xxx.io' (for example). I can create addresses such as 'main.xxx.io' or 'web.xxx.io' using alias record sets and they work well. But I can't access the website directly using 'xxx.io' as the address. How do I achieve this?
PS: my colleague says it used to work, but now it doesn't and he doesn't know why either.

,azure-web-app-service +52111269,"How to backup Azure Cosmos DB with Azure Function App

Problem:

I am trying to build a backup solution for Azure Cosmos DB that gives us DB dumps on a regular basis, in case we programmatically corrupt the data in our database. The issue is that Data Factory does not (yet) exist for Azure Germany, and we cannot rely on the automatic backups from Azure (which are only retained for 8 hours). I do not want to use any extra applications outside the cloud.

What I found so far:

https://www.npmjs.com/package/mongo-dump-stream

mongo-dump-stream should be able to connect to our DB and read from it.

My idea is to use this npm package from within Azure Functions and send the result of the dump to blob storage.

My question:

How can I send the result to a blob storage?

Can you give an example for concrete implementation?

""",azure-functions +43990866,"How to check if an Azure VM is running at a specific time via some kind of alert?

I would like to know how to create an alert for an Azure VM which tells me whether the server(s) are running at a specific time.

The scenario: servers for the Azure network need to start at 7:30am to be ready for the users, as they shut down at 7:30pm each day to save money. Today the Azure Automation script could not find any VMs in the resource group, which meant the servers were not started. I want to create an alert that will tell me only if the server(s) are not running at, say, 7:45am, so I can start them. (Running the script now does find all of the servers, but it didn't before for some reason... maybe Azure was moving the VMs in the resource group?)

I have looked at: Microsoft Operations Management Suite > Log Search > Add Alert Rule, and Resource Manager > Virtual Machines > Monitoring > Alert Rules > Add metric alert & Add activity log alert. But I can't see where to run the alert only at a specific time.

Update/Edit: Script used:

param (
    [Parameter(Mandatory=$false)]
    [String]$AzureCredentialAssetName = 'AzureCred',
    [Parameter(Mandatory=$false)]
    [String]$AzureSubscriptionIDAssetName = 'AzureSubscriptionId'
)

# Setting error and warning action preferences
$ErrorActionPreference = "SilentlyContinue"
$WarningPreference = "SilentlyContinue"

# Connecting to Azure
$Cred = Get-AutomationPSCredential -Name $AzureCredentialAssetName -ErrorAction Stop
$null = Add-AzureAccount -Credential $Cred -ErrorAction Stop -ErrorVariable err
$null = Add-AzureRmAccount -Credential $Cred -ErrorAction Stop -ErrorVariable err

# Selecting the subscription to work against
$SubID = Get-AutomationVariable -Name $AzureSubscriptionIDAssetName
Select-AzureRmSubscription -SubscriptionId $SubID

# Getting the resource group
$ResourceGroup = "Servers"
# Getting all virtual machines
$RmVMs = (Get-AzureRmVM -ResourceGroupName $ResourceGroup -ErrorAction $ErrorActionPreference -WarningAction $WarningPreference).Name

# Managing virtual machines deployed with the Resource Manager deployment model
"Loop through all VMs in resource group $ResourceGroup."
if ($RmVMs) {
    foreach ($RmVM in $RmVMs) {
        "`t$RmVM found ..."
        $RmPState = (Get-AzureRmVM -ResourceGroupName $ResourceGroup -Name $RmVM -Status -ErrorAction $ErrorActionPreference -WarningAction $WarningPreference).Statuses.Code[1]
        if ($RmPState -eq 'PowerState/deallocated') {
            "`t$RmVM is starting up ..."
            $RmSState = (Start-AzureRmVM -ResourceGroupName $ResourceGroup -Name $RmVM -ErrorAction $ErrorActionPreference -WarningAction $WarningPreference).IsSuccessStatusCode
            if ($RmSState -eq 'True') {
                "`t$RmVM has been started."
            }
            else {
                "`t$RmVM failed to start."
            }
        }
    }
}
else {
    "No VMs for $ResourceGroup deployed with the Resource Manager deployment model."
}
"Runbook Completed."

I just want a fail-safe to know if the servers are not running when they should be.

Expected output:

Loop through all VMs in resource group Servers.
	DOMAINCONTROLLER found ...
	SQLSERVER found ...
	GATEWAY found ...
	APPLICATIONHOST found ...
Runbook Completed.

instead of:

Loop through all VMs in resource group Servers.
No VMs for Servers deployed with the Resource Manager deployment model.
Runbook Completed.

I.e. rerunning the same script manually gave expected results.

""",azure-virtual-machine +17216584,Is it possible to have a webservice over an Azure Servicebus?

I have a virtual machine on Azure which will listen for messages over the Azure Service Bus, and another developer needs to connect to this service bus to send messages to my service. To do this we need to come up with some protocol for the communication. I was thinking about using WSDL to make the server something webservice-like, but instead of listening on the standard HTTP ports it would connect to the service bus, and within it a topic with a subscription or whatever. I'm still not sure what would be best.

So is it possible? Has anyone done something similar before? Are there some examples?

,azure-virtual-machine +56560727,"Dependency Injection for Azure Functions V1 DI

I have a new issue regarding dependency injection in Azure Functions v1.

Actual situation:

I have an Azure Function v1, HTTP-triggered, in which I want to reference my business services so I can use them without reinventing the wheel. I searched the internet and found this interesting article from Microsoft.

However, it seems it only works with Azure Functions v2 (.NET Core), because whenever I try to install Microsoft.Azure.Functions.Extensions I always get the following error:

NU1107 Version conflict detected for Microsoft.Azure.WebJobs. Install/reference Microsoft.Azure.WebJobs 3.0.5 directly to project FunctionApp2365431 to resolve this issue. FunctionApp2365431 -> Microsoft.Azure.Functions.Extensions 1.0.0 -> Microsoft.Azure.WebJobs (>= 3.0.5) FunctionApp2365431 -> Microsoft.NET.Sdk.Functions 1.0.28 -> Microsoft.Azure.WebJobs (>= 2.2.0 && < 2.4.0)

Following a comparison between the dlls of two project (one in .net core (in which i could implement DI) and the other in Net framework 461)

\""Comparison\""

You can see that the versions are different: the .NET Core v2 Azure Function uses 3.5 and the v1 uses 2.2.

I tried to reference/install the package version manually as asked in the error, and I was then asked to update the Newtonsoft.Json package too. I did that as well, and I could then force the installation of Microsoft.Azure.Functions.Extensions, BUT it broke the project and I couldn't stop getting errors.

Here is the build result after doing the steps mentioned above:

\""Build

My question here is: how can I set up DI in a .NET Framework project, e.g. for Azure Functions v1?

Why is there only documentation for setting up DI in Azure Functions v2?

Is v1 deprecated, or is Microsoft not supporting v1 Azure Functions anymore? Because this is weird!

Thanks in advance

EDIT: My question is not a duplicate of this StackOverflow thread, because that approach is complicated and outdated compared to the solution Microsoft provides for v2 Azure Functions.

I also watched this interesting video (still haven't tested it); the only problem there is that I want to use something official provided by Microsoft, as for Azure Functions v2, and not the package the presenter developed.

""",azure-functions +49851795,Move files from Azure Storage blob to an Ftp server

I need to upload a few files from Azure Storage to an external Ftp server.

Is there any way with Azure to upload these files directly, without downloading them first?

,azure-storage +19583608,"Azure Storage browsing client software options

As far as I know, these are the only clients available to browse Azure Table Storage, Queues and Blobs:

Are there any other clients out there that I am unaware of?

""",azure-storage +56855931,push updates from azure to github

I am new to Azure DevOps and facing a problem integrating Azure DevOps and GitHub; maybe you could help. My question is: how can I push commits that are made on the Azure DevOps repo to the corresponding repo that resides in my GitHub account?

For example: 1) I import a file abc.py from a GitHub private repo. 2) I make changes to abc.py in the Azure DevOps repo and commit them. 3) Now all the commits I made to abc.py on the master branch of the Azure repo should be pushed to abc.py on the master branch of my private GitHub repo, from where it was previously imported.

thanks for your help.

,azure-devops +45354889,"clustering node on iisnode using nodeProcessCountPerApplication

I have a web app in Azure which is using node.js and socket.io and I decided to use the clustering supported by IISNODE using nodeProcessCountPerApplication as below in my web.config

<iisnode nodeProcessCountPerApplication="0" />

However, when I apply this I get a 500.1013 internal server error, which states:

Most likely causes: IIS received the request; however an internal error occurred during the processing of the request. The root cause of this error depends on which module handles the request and what was happening in the worker process when this error occurred. IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly. IIS was not able to process configuration for the Web site or application. The authenticated user does not have permission to use this DLL. The request is mapped to a managed handler but the .NET Extensibility Feature is not installed.

I looked for examples but couldn't find anything similar. I am wondering what I am doing wrong here. I want to be able to use all the processors of my machine. Thanks!

""",azure-web-app-service +52446562,Is there a way do undo pipeline deletion in VSTS?

After a release pipeline is deleted is there a way to undo that deletion? One of our critical pipelines was deleted and while we had backed up the definition it would be nice to know if Azure DevOps has the undo functionality built in.

,azure-devops +57035835,"Azure Functions: how to efficiently send batch of messages to Service Bus using bindings?

I'm working with Azure Functions 2.0 and trying to send a collection of messages to an Azure Service Bus topic. Currently I'm using IAsyncCollector<T> as the output binding:

[FunctionName("MyFunction")]
public static async Task MyFunction(
    [ServiceBus("MyTopicName", EntityType.Topic, Connection = "ServiceBusConnection")] IAsyncCollector<Message> outTopic)
{
    var messages = await GetMessages();
    foreach (var message in messages)
    {
        await outTopic.AddAsync(message);
    }
}

Such an approach is pretty convenient (declarative code, no boilerplate) but brings one significant issue: for a large number of messages the performance is unacceptable. Is there any other recommended way to work with large batches that need to be sent to Azure Service Bus?
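The usual alternative to awaiting one send per message is to group messages into fixed-size batches so each network call carries many messages. A minimal, language-agnostic sketch of that batching idea (not the Functions binding itself; `send_batch` is a placeholder for the real Service Bus client's batch-send call):

```python
# Sketch: group messages into batches so each network round trip
# sends many messages at once. `send_batch` stands in for the real
# Service Bus sender's batch-send operation.
def chunked(items, batch_size):
    """Yield consecutive batches of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

sent = []
def send_batch(batch):          # placeholder for the real client call
    sent.append(list(batch))

messages = [f"msg-{i}" for i in range(10)]
for batch in chunked(messages, 4):
    send_batch(batch)

print([len(b) for b in sent])   # → [4, 4, 2]
```

Three round trips instead of ten; the real client additionally enforces a maximum batch size in bytes, which this sketch ignores.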

""",azure-functions +43946750,"duplicate herocard response in proactive bot

I want to display an API response as a card, but the response is duplicated. What am I doing wrong? Is this a valid use of a hero card?

bot.dialog('/receipt', [
    function (session) {
        bot.on('trigger', function (message) {
            var queuedMessage = message.value;
            var msg = new builder.Message()
                .address(queuedMessage.address)
                .attachments([
                    new builder.HeroCard(session)
                        .title("Good news - I can book this for you:")
                        .subtitle("Customer: " + queuedMessage.name)
                        .images([
                            builder.CardImage.create(session, "")
                        ])
                        .buttons([
                            builder.CardAction.dialogAction(session, "bookIt", "", "Book it?"),
                            builder.CardAction.dialogAction(session, "help", "", "Start again?")
                        ])
                ]);
            session.send(msg);
        })
    }
]);
""",azure-functions +44898939,In VSTS How to constraint/restrict a team member's working capacity in case working in multiple projects

Assume a team member A works on multiple projects/applications (P1, P2, P3). While doing capacity planning I'm able to allocate him 8 hrs per day in each project (P1, P2, P3). Is there any way to freeze/constrain a team member's capacity to 8 hrs across all projects combined? Any provision to create such a rule/restriction at the account level?

,azure-devops +57477342,"Azure function to convert encoded json IOT Hub data to csv on azure data lake store

I have simulated devices sending messages to IoT Hub blob storage, and from there I am copying the data (encoded in JSON format) to Azure Data Lake Gen2 by creating a pipeline using Azure Data Factory.

How can I convert these JSON output files to CSV files using an Azure Function and store them again on Azure Data Lake Store?


@Adam

Thank you so much for all your answers; I implemented them successfully in my Azure account. But this did not actually give me the desired output I was looking for.

Hope this makes my requirement clear.

My input file which is sent to IoT Hub is:

(screenshot: sample input file)

Below are sample records of the data stored in the IoT Hub endpoint (Blob Storage) - JSON, a set of objects:

{"EnqueuedTimeUtc":"2019-08-06T10:46:39.4390000Z","Properties":{"$.cdid":"Simulated-File"},"SystemProperties":{"messageId":"d48413d2-d4d7-41bb-9470-dc0483466253","correlationId":"a3062fcb-5513-4c09-882e-8e642f8fe38e","connectionDeviceId":"Simulated-File","connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"637001643970703748","contentType":"UTF-8","enqueuedTime":"2019-08-06T10:46:39.4390000Z"},"Body":"eyIiOiI1OCIsInJvdGF0ZSI6IjQ2Mi4wMjQxODE3IiwiZGF0ZXRpbWUiOiIxLzMvMjAxNSAxNjowMCIsIm1hY2hpbmVJRCI6IjEiLCJ2b2x0IjoiMTU2Ljk1MzI0NTkiLCJwcmVzc3VyZSI6IjEwNi4zNDY3MTc5IiwidmlicmF0aW9uIjoiNDguODIwMzAwODYifQ=="}

{"EnqueuedTimeUtc":"2019-08-06T10:46:40.5040000Z","Properties":{"$.cdid":"Simulated-File"},"SystemProperties":{"messageId":"9da638d9-fdba-41d3-86df-3ea6cedc44e7","correlationId":"aeb20305-6fee-4a59-9053-5fa1d0c780a9","connectionDeviceId":"Simulated-File","connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"637001643970703748","contentType":"UTF-8","enqueuedTime":"2019-08-06T10:46:40.5040000Z"},"Body":"eyIiOiI1OSIsInJvdGF0ZSI6IjQyOS44MjIxNDM1IiwiZGF0ZXRpbWUiOiIxLzMvMjAxNSAxNzowMCIsIm1hY2hpbmVJRCI6IjEiLCJ2b2x0IjoiMTY0LjE0ODA5NDYiLCJwcmVzc3VyZSI6IjEwNC41MzIxMjM2IiwidmlicmF0aW9uIjoiNDMuNzg4NjgxNTUifQ=="}

**The JSON "Body" field contains the actual IoT device data (JSON, Base64-encoded), along with some system and message properties.

**Creating a JSON-to-CSV pipeline does not extract the actual data to the Data Lake Store.

The output CSV is an exact copy of the JSON file (the real data is not extracted) after running the ADF pipeline.

[![enter image description here][2]][2]

1: https://i.stack.imgur.com/iCBvi.png

[2]: https://i.stack.imgur.com/HaBMn.png

using System;
using System.IO;
using System.Net;
using System.Text;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

namespace BlobTrigger
{
    public static class ExtractJson
    {
        [FunctionName("BlobTriggerCSharp")]
        public static void Run(
            [BlobTrigger("blobcontainer/{name}")] Stream myBlob,
            [Blob("bloboutput/{name}", FileAccess.Write)] Stream outputBlob,
            string name,
            ILogger log)
        {
            // var len = myBlob.Length;
            // myBlob.CopyTo(outputBlob);

            // Read the blob stream and parse JSON with JObject
            string json = new StreamReader(myBlob).ReadToEnd();
            var jsonObject = JObject.Parse(json);

            foreach (var p in jsonObject) // properties
            {
                Console.WriteLine(p.Key);        // e.g. Body
                Console.WriteLine(p.Value.Type); // e.g. String
            }

            // Base64-decode the "Body" field to get the device payload
            var base64EncodedBytes = Convert.FromBase64String((string)jsonObject["Body"]);
            string payload = Encoding.UTF8.GetString(base64EncodedBytes);

            // write parsed output to outputBlob stream
            var bytes = Encoding.UTF8.GetBytes(payload);
            outputBlob.Write(bytes, 0, bytes.Length);
        }
    }
}
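The decode-and-flatten step can also be sketched standalone: the "Body" field is Base64-encoded UTF-8 JSON, so extracting the real data means decoding it before writing CSV. This is an illustration only (the payload below is re-encoded inside the sketch so it is self-contained, mimicking the sample records above):

```python
import base64
import csv
import io
import json

# A record shaped like the IoT Hub blob output above: the device data is
# Base64-encoded JSON inside the "Body" field.
payload = {"rotate": "462.0241817", "datetime": "1/3/2015 16:00",
           "machineID": "1", "volt": "156.9532459",
           "pressure": "106.3467179", "vibration": "48.82030086"}
record = {"contentType": "UTF-8",
          "Body": base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")}

def body_to_csv_row(rec):
    """Decode rec['Body'] (Base64-encoded UTF-8 JSON) and return CSV text."""
    data = json.loads(base64.b64decode(rec["Body"]).decode("utf-8"))
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(data))
    writer.writeheader()
    writer.writerow(data)
    return buf.getvalue()

print(body_to_csv_row(record))
```

Copying the file as-is (as the pipeline does) skips exactly this decode step, which is why the CSV output looked like a copy of the JSON.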

(screenshot: expected output)

""",azure-functions +19599819,"Azure Blob 400 Bad request on Creation of container

I'm developing an ASP.NET MVC 4 app and I'm using Azure Blob storage to store the images that my users are going to upload. I have the following code:

var storageAccount = CloudStorageAccount.Parse(ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);
var blobStorage = storageAccount.CreateCloudBlobClient();
// merchantKey is just a GUID that is associated with the merchant
var containerName = ("ImageAds-" + merchant.merchantKey.ToString()).ToLower();
CloudBlobContainer container = blobStorage.GetContainerReference(containerName);
if (container.CreateIfNotExist())
{
    // Upload the file
}

As soon as the if statement is executed I get the following exception:

{"The remote server returned an error: (400) Bad Request."}

I thought it was the container's name, but I don't see anything wrong with it. The connection string seems to create a good storage account with all the details for the blob. I'm at a loss. I've researched the web and everyone says it's a naming problem, but I can't find anything wrong with the name.

Test Container name that I used: imageads-57905553-8585-4d7c-8270-be9e611eda81

The Container has the following uri: {http://127.0.0.1:10000/devstoreaccount1/imageads-57905553-8585-4d7c-8270-be9e611eda81}

UPDATE: I have changed the container name to just "image" and I still get the same exception. Also, the development connection string is as follows: <add name="StorageConnection" connectionString="UseDevelopmentStorage=true" />
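Since a 400 on container creation is commonly blamed on naming, the documented container-name rules (3-63 characters, lowercase letters/digits/hyphens, must start and end with a letter or digit, no consecutive hyphens) can be checked up front. A hedged sketch of such a pre-flight validation, not part of any Azure SDK:

```python
import re

# Sketch: validate a blob container name against the documented rules
# before calling CreateIfNotExist, so naming errors fail fast and clearly.
CONTAINER_NAME_RE = re.compile(r"^(?=.{3,63}$)[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_container_name(name):
    """True if name satisfies the documented blob container naming rules."""
    return bool(CONTAINER_NAME_RE.match(name))

print(is_valid_container_name("imageads-57905553-8585-4d7c-8270-be9e611eda81"))  # True
print(is_valid_container_name("ImageAds-57905553"))  # False: uppercase letters
```

The name from the question passes this check, which suggests the 400 has another cause (with the storage emulator, a client-library/service version mismatch is a frequent one).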

""",azure-storage +53443880,"CallerFilePathAttribute not returning file path with valid directory separators on azure's linux container app service

I have the following method in a netcore2.1 web app:

public static void Information(string message, [CallerFilePath] string filePath = "")
{
    var fileNameWithoutExtn = Path.GetFileNameWithoutExtension(filePath);
    . . .
}

When running on azure app service (windows host) it behaves as expected:

filePath = C:\\web\\src\\production\\MyWebsite\\Controllers\\ChallengeController.cs

fileNameWithoutExtn = ChallengeController



But when I run this on azure's linux container app service:

filePath = C:\\web\\src\\production\\MyWebsite\\Controllers\\ChallengeController.cs

fileNameWithoutExtn = C:\\web\\src\\production\\MyWebsite\\Controllers\\ChallengeController

And

Path.DirectorySeparatorChar = /

Path.AltDirectorySeparatorChar = /

Path.PathSeparator = :

Path.VolumeSeparatorChar = /

Why is CallerFilePath giving me a path which does not match with DirectorySeparatorChar or AltDirectorySeparatorChar ?
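The mismatch arises because the caller file path is baked in at compile time (on a Windows build machine, with backslashes), while the path APIs at runtime use the Linux separator. A separator-agnostic workaround is to split on both separators before stripping the extension; here is a minimal sketch of that idea:

```python
import re

# Sketch: extract the file stem from a path regardless of whether it uses
# Windows ('\') or POSIX ('/') separators, so a compile-time Windows path
# still yields just the file name when processed on Linux.
def file_stem(path):
    name = re.split(r"[\\/]", path)[-1]  # last segment, either separator
    return name.rsplit(".", 1)[0]        # drop the extension

print(file_stem(r"C:\web\src\production\MyWebsite\Controllers\ChallengeController.cs"))
print(file_stem("/src/Controllers/ChallengeController.cs"))
```

Both calls print `ChallengeController`, which is the behavior the logging helper above expects on either host.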

PS: I posted the same question on the MSDN forum but did not get any response, hence posting here. I will update here if I hear back there.

""",azure-web-app-service +50275510,"How Often Is The Azure Functions Beta Host Updated on the Azure Portal

I'm using Azure Functions v2 in a project I've started recently, and I'm trying to use a proxy to redirect a request for /.well-known to one of my functions.

Proxies do not work on Azure Functions v2 (as stated here). They have now been fixed in release v2.0.11737-alpha, but the Azure portal is still using v2.0.11651.0.

Does anyone know how long it takes after a beta release is created in GitHub for it to be available in the Azure portal?

""",azure-functions +36064904,"How to run Azure VM CustomScriptExtension as domain user? (part 2)

Updated to explain my root problem: if Azure has extensions for VMs, as they are being provisioned, to join a domain and to run scripts, how can I run a script as a domain user?

The script needs to be run as a domain user in order to access a file share to retrieve installation files and other scripts that are neither part of the VM template image nor can (reasonably) be uploaded to Azure blob storage and downloaded as part of provisioning.

I split this question in two because the 2nd half (represented here) didn't get solved.

What I have working is a PowerShell script that takes a JSON file to create a new VM; the JSON file contains instructions for the VM to join a domain and run a custom script. Both things do happen, but the script runs as the user workgroup\system and therefore doesn't have access to a network drive.

  • How can I best provide a specific user's credentials for such a script?

I'm trying to have the script spawn a new PowerShell session with the credentials of a different user, but I'm having a hard time figuring out the syntax -- I can't even get it to work on my development workstation. Naturally security is a concern, but if I could get this to work using encrypted stored credentials it might be acceptable.

... but don't limit your answers -- maybe there's an entirely different way to go about this and achieve the same effect?

Param(
    [switch]$sudo,  # Indicates we've already tried to elevate to admin
    [switch]$su     # Indicates we've already tried to switch to domain user
)

try {

    # Pseudo-constants
    $DevOrProd = (Get-Item $MyInvocation.MyCommand.Definition).Directory.Parent.Name
    $PsScriptPath = Split-Path -parent $MyInvocation.MyCommand.Definition
    $pathOnPDrive = "\\dkfile01\P\SoftwareTestData\Azure\automation\$DevOrProd\run-once"
    $fileScriptLocal = $MyInvocation.MyCommand.Source
    $fileScriptRemote = "$pathOnPDrive\run-once-from-netdrive.ps1"
    # $filePw = "$pathOnPDrive\cred.txt"
    $fileLog = "$PsScriptPath\switch-user.log"
    $Myuser = "mohican"
    $Myuserpass = "alhambra"
    $Mydomainuser = "mydomain\$Myuser"
    $Mydomain = "mydomain.com"

    # Check variables
    write-output("SUDO=[$SUDO]")
    write-output("SU=[$SU]")

    # Functions
    function Test-Admin {
        $currentUser = New-Object Security.Principal.WindowsPrincipal $([Security.Principal.WindowsIdentity]::GetCurrent())
        return ($currentUser.IsInRole([Security.Principal.WindowsBuiltinRole]::Administrator))
    }

    # Main
    write-output("Run-once script starting ...")

    # Check admin privilege
    write-output("Checking admin privilege ...")
    if (Test-Admin) {
        write-output("- Is admin.")
    } else {
        write-output("- Not an admin.")
        if ($sudo) {
            write-output("  - Already tried elevating, didn't work.")
            write-output("Run-once script on local VM finished.")
            write-output("")
            exit(0) # Don't return failure exit code because Azure will report it as if the deployment broke...
        } else {
            write-output("  - Attempting to elevate ...")
            $arguments = "-noprofile -file $fileScriptLocal"
            $arguments = $arguments + " -sudo"
            try {
                Start-Process powershell.exe -Verb RunAs -ArgumentList $arguments
                write-output("    - New process started.")
            } catch {
                write-output("    - New process failed to start.")
            }
            write-output("Run-once script on local VM finished.")
            write-output("")
            exit(0) # The action will continue in the spawned process
        }
    }
    write-output("Checked admin privilege ... [OK]")

    # Check current user
    write-output("Checking user account ...")
    $hostname = $([Environment]::MachineName).tolower()
    $domainname = $([Environment]::UserDomainName).tolower()
    $thisuser = $([Environment]::UserName).tolower()
    write-output("- Current user is ""$domainname\$thisuser"" on ""$hostname"".")
    write-output("- Want to be user ""$Myuser"".")
    if ($Myuser -eq $thisuser) {
        write-output("  - Correct user.")
    } else {
        write-output("  - Incorrect user.")
        if ($su) {
            write-output("  - Already tried switching user, didn't work.")
            write-output("Run-once script on local VM finished.")
            write-output("")
            exit(0) # Don't return failure exit code because Azure will report it as if the deployment broke...
        } else {
            write-output("  - Attempting to switch to user ""$Mydomainuser"" with password ""$Myuserpass"" ...")
            # FIXME -- This does not work... :-(
            $MyuserpassSecure = ConvertTo-SecureString $Myuserpass -AsPlainText -Force
            $credential = New-Object System.Management.Automation.PSCredential $Mydomainuser, $MyuserpassSecure
            $arguments = "-noprofile -file $fileScriptLocal"
            $arguments = $arguments + " -sudo -su -Credential $credential -computername $hostname"
            try {
                Start-Process powershell.exe -Verb RunAs -ArgumentList $arguments
                write-output("    - New process started.")
            } catch {
                write-output("    - New process failed to start.")
            }
            write-output("Run-once script on local VM finished.")
            write-output("")
            exit(0) # The action will continue in the spawned process
        }
    }
    write-output("Checked user account ... [OK]")

    # Run script from P: drive (finally!)
    write-output("Attempting to run script from P: drive ...")
    write-output("- Script file: ""$fileScriptRemote""")
    if (test-path $fileScriptRemote) {
        write-output("Running script from P: drive ...")
        $arguments = "-noprofile -file $fileScriptRemote"
        try {
            Start-Process powershell.exe -Verb RunAs -ArgumentList $arguments
            write-output("    - New process started.")
        } catch {
            write-output("    - New process failed to start.")
        }
        write-output("Run-once script on local VM finished.")
        write-output("")
        exit(0) # The action will continue in the spawned process
    } else {
        write-output("- Could not locate/access script file!")
        write-output("Ran script from P: drive ... [ERROR]")
    }

    write-output("Run-once script on local VM finished.")
    write-output("")

} catch {
    write-warning("Unhandled error in line $($_.InvocationInfo.ScriptLineNumber): $($error[0])")
    write-output("ABEND")
    write-output("")
}
""",azure-virtual-machine +53309468,TFS to Azure DevOps Migration

I am undergoing the process of moving from a very old TFS to Azure DevOps. There is a consideration whether to use TFVC or Git. I used the git-tfs deep clone feature to create a repo and it was about 3 GB. Does that mean the repo is too large to use as a Git repo? If I cannot logically break it into smaller repos, does this mean I have to continue using TFVC instead?

,azure-devops +53559981,Event Hub Connection Pooling

I have an Azure Function that is receiving payloads from a service, doing some basic operations on them, and then forwarding them to an event hub. The solution works fine, but spikes in latency occur quite often (every 10 minutes or so).

My initial assumption was that this is because of the high cost of creating an event hub connection, so I proceeded to create a simple pooling class that can hold multiple resources. That has led to some improvements, but because of the inconsistent nature of the stream I still face issues when there are spikes in usage.

Looking at the event hub logs I can see that the connections get closed after 5 minutes. Is there any way of keeping the connection alive for longer? The access method for the pool is FIFO, so being able to keep the connections alive just a little longer would allow me to cycle through more of them and, as a result, be better prepared for spikes in the stream. I have been going through Microsoft's documentation for Event Hub and I can't see any setting or way of keeping the connection alive for longer.

Any help would be greatly appreciated.
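If the idle timeout itself cannot be changed, one common workaround is to make the pool stale-aware: reuse connections oldest-first, but discard and recreate any connection that has sat idle longer than the service's close window. A minimal sketch of that FIFO pooling idea, with `make_connection` as a placeholder for the real Event Hub client factory (not an actual SDK API):

```python
import collections
import time

# Sketch: a FIFO pool that tracks when each connection was last used and
# replaces any connection idle longer than IDLE_LIMIT, staying safely under
# the service-side ~5-minute close observed in the logs.
IDLE_LIMIT = 240  # seconds; refresh before the server closes the connection

class FifoPool:
    def __init__(self, make_connection, size):
        self._make = make_connection
        self._pool = collections.deque(
            (make_connection(), time.monotonic()) for _ in range(size))

    def acquire(self):
        conn, last_used = self._pool.popleft()
        if time.monotonic() - last_used > IDLE_LIMIT:
            conn = self._make()  # stale: replace instead of reusing
        return conn

    def release(self, conn):
        self._pool.append((conn, time.monotonic()))

# Usage with a dummy factory standing in for the Event Hub client:
pool = FifoPool(make_connection=lambda: object(), size=4)
conn = pool.acquire()
pool.release(conn)
```

Another option worth measuring is a periodic keep-alive: sending a no-op or heartbeat on idle connections inside the close window, so they are never considered idle by the service.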

,azure-functions +57512171,"azure virtual machine linux diagnostic agent unable to find its event hub

I'm tasked by my company to integrate "azure audit logs" for some important services, e.g. virtual machines, and Azure activity logs.

I'm new to the whole Azure technology stack and I'm doing my best to get it done.

So far I have set up the "virtual machine Linux diagnostic agent" using

https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/diagnostics-linux

The output in json is given in link here

https://jmp.sh/Xbs1SXg

My objective is to get those events read/processed by Event Hubs. In the JSON I see no mention of "sinks" or an "eventhub" entry. I don't know where this data is shipped.

For example:

"sink": [
    {
        "name": "sinkname",
        "type": "EventHub",
        "sasURL": "https SAS URL"
    },
    ...
]

Where and how can I use this? Because in my JSON I don't see this code block.

I have created a event-hub workspace and instance but I don't know how to connect the two.

I'm attaching some of the errors I'm getting on the "export template" tab.

(screenshots: export template error, export template error 2)

""",azure-virtual-machine +31415705,Windows Azure behind NATed router

I am working on a project and am attempting to run an FTP daemon on an Azure VM running the Technical Preview 2. The daemon reports that it is behind a NATed router, and as such I cannot connect to it by any means other than the remote desktop connection. (I will be running other daemons on this server as well, and they also have this problem.)

I need some way to access this router that my Azure server is behind, to configure it to allow the range of ports that I need to access.

The fine folks at MVA instructed me to ask here so here I am.

,azure-virtual-machine +48914632,Azure Application SQL Database slow spin up time

I've got a little bit problematic task at work maybe someone will be able to help.

We've got two identical applications running on Azure AppService. Both using Standard (S1) AppService both using S2 Azure SQL Databases.

They're really identical: two apps running for different clients, doing the same things, using an identical db schema, which we have checked twice, manually and automatically via the Visual Studio 2017 schema comparer.

The problem appears when we try to run app2 using db2. App1 with db1 works fine; App2 with db1 also works fine. App1 with db2 needs 10 seconds to handle ANY request, and the same goes for App2 with db2.

But... both DBs have the same schema and are hosted on (basically) the same logical Azure server, sharing exactly the same configuration. Even if it were a performance issue, db2 has less data in it, so it should be even faster (the single biggest table diff is 746k vs 1.6k records).

We have exhausted basically every option available we tried:

  • Recreating and redeploying the app on different AppServices and AppService sizes with the same result
  • Recreating and redeploying SQL Database to new/another SQL Server inside Azure even with upgrade from S2 to P2 performance
  • Running the app locally with the debugger on to see what's going on - the same result, ~10.1 s of spin-up time for every request.

I couldn't find anything responsible for it either in the Azure infrastructure or the application itself (the app1/db1 pair works fine).

Any input is greatly appreciated.

,azure-web-app-service +11720243,"How is Geo Redundant Storage charged on Microsoft Azure?

So I have been using the Azure 3-month trial to test out whether I want to use Microsoft Azure to host a project I am working on. However, I have been very confused, as I have run out of "Geo Redundant Storage" in the first month and I don't really understand why.

I have read this: https://www.windowsazure.com/en-us/pricing/details/ and the only thing I can make of it is that it takes an average of how much storage you are using across a month, e.g. as long as I am using less than 35 GB (for a 35 GB limit) of storage space on average, I am in the clear.

So under my Azure subscriptions, 'STORAGE (GB/MONTH) - GEO REDUNDANT' says '101.027% of 35 GB/month' (so I have reached my cap).

But I don't understand why this would be happening. All I have is a simple server with a Node.js web application and a Redis database (pretty much empty at the moment), all running on an Ubuntu VM. I can't log in and check storage now because it is disabled, but I am pretty sure it is nowhere near 35 GB total storage and never has been.

I am hoping someone can explain how Azure storage is charged, or tell me if I have missed something silly.

Edit: It just hit me that it could be Redis doing crazy things with IO? Not sure if this is possible, but if it is, would I be better off using locally redundant storage and paying for locally redundant storage transactions?

Edit 2: My graph says I had been using 1.96 GB/day. So does that mean it's not the total hard drive space per month, but hard drive space per day? (Using 2 GB of data probably sounds about right with the OS included; if this is the case, that means they give you less than 2 GB of space on the trial, which seems minute??)

[screenshot: Azure subscription usage]

""",azure-storage +54922750,"delete specific resource i.e vm nic nsg using terraform

I have created an Azure VM, NIC, and NSG behind the firewall. Now I need to delete a specific created VM/NIC/NSG behind the firewall. I will be doing this continuously.

When I try to delete a specific VM/NSG/NIC with the commands below, it deletes the whole resource group.

    terraform init
    terraform apply -no-color -auto-approve
    terraform destroy -force

My code:

    # Configure the Microsoft Azure Provider
    provider "azurerm" {
      subscription_id = "xxxxx"
      client_id       = "xxxxx"
      client_secret   = "xxxxx"
      tenant_id       = "xxxxx"
    }

    # Locate the existing custom/golden image
    data "azurerm_image" "search" {
      name                = "AZLXSPTDEVOPS01_Image"
      resource_group_name = "RG-EASTUS-SPT-PLATFORM"
    }

    output "image_id" {
      value = "/subscriptions/xxxxxxx/resourceGroups/RG-EASTUS-SPT-PLATFORM/providers/Microsoft.Compute/images/AZLXSPTDEVOPS01_Image"
    }

    # Create a Resource Group for the new Virtual Machine.
    resource "azurerm_resource_group" "main" {
      name     = "RG-PF-TEST"
      location = "eastus"
    }

    # Create a Subnet within the Virtual Network
    resource "azurerm_subnet" "internal" {
      name                 = "SNET-IN"
      virtual_network_name = "VNET-PFSENSE-TEST"
      resource_group_name  = "${azurerm_resource_group.main.name}"
      address_prefix       = "192.168.2.0/24"
    }

    # Create a Network Security Group with some rules
    resource "azurerm_network_security_group" "main" {
      name                = "RG-Dev-NSG"
      location            = "${azurerm_resource_group.main.location}"
      resource_group_name = "${azurerm_resource_group.main.name}"

      security_rule {
        name                       = "allow_SSH"
        description                = "Allow SSH access"
        priority                   = 100
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
      }
    }

    # Create a network interface for VMs and attach the PIP and the NSG
    resource "azurerm_network_interface" "main" {
      name                      = "NIC-Dev"
      location                  = "${azurerm_resource_group.main.location}"
      resource_group_name       = "${azurerm_resource_group.main.name}"
      network_security_group_id = "${azurerm_network_security_group.main.id}"

      ip_configuration {
        name                          = "primary"
        subnet_id                     = "${azurerm_subnet.internal.id}"
        private_ip_address_allocation = "static"
        private_ip_address            = "192.168.2.6"
      }
    }

    # Create a new Virtual Machine based on the Golden Image
    resource "azurerm_virtual_machine" "vm" {
      name                             = "AZLXSPTDEVOPS01"
      location                         = "${azurerm_resource_group.main.location}"
      resource_group_name              = "${azurerm_resource_group.main.name}"
      network_interface_ids            = ["${azurerm_network_interface.main.id}"]
      vm_size                          = "Standard_DS12_v2"
      delete_os_disk_on_termination    = true
      delete_data_disks_on_termination = true

      storage_image_reference {
        id = "${data.azurerm_image.search.id}"
      }

      storage_os_disk {
        name              = "AZLXSPTDEVOPS01-OS"
        caching           = "ReadWrite"
        create_option     = "FromImage"
        managed_disk_type = "Standard_LRS"
      }

      os_profile {
        computer_name  = "APPVM"
        admin_username = "devopsadmin"
        admin_password = "admin#2019"
      }

      os_profile_linux_config {
        disable_password_authentication = false
      }
    }

I need to delete only the specific VM, NIC, and NSG. Could anyone help me, please?
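
One approach (not from the original post): Terraform's `-target` option limits an operation to specific resource addresses, so the whole resource group does not have to be destroyed. A sketch, using the resource addresses from the configuration above:

```
terraform destroy \
  -target=azurerm_virtual_machine.vm \
  -target=azurerm_network_interface.main \
  -target=azurerm_network_security_group.main
```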

""",azure-virtual-machine +39034354,"Azure VM Resource Deployment Failed: \The system is not authoritative for the specified account\""

I have been using an Azure VM (Windows 10 Visual Studio Developer VM) for several weeks, but have been unable to log in for several hours.

The machine is reported as running. RDP finds the machine and presents the login box, but login fails: (Your credentials did not work)

The VM can be restarted but the same error occurs. Boot diagnostics shows the Windows 10 'beach cave' image

Attempts to reset the password give errors in the event log:

Failed to reset password. At least one resource deployment operation failed. Please list deployment operations for details. See https://aka.ms/arm-debug for usage details.

Then Deployment operations has this error:

Deployment failed Deployment to resource group 'MY_AZURE_GROUP' failed. Additional details from the underlying API that may be helpful. At least one deployment operation failed. Please list deployment operations for details.

Then this error expands to:

Status: Conflict Provisioning State: Failed

Type: Microsoft.Compute/virtualMachines/extensions

StatusMessage:

    {
      "status": "Failed",
      "error": {
        "code": "ResourceDeploymentFailure",
        "message": "The resource operation completed with terminal provisioning state 'Failed'.",
        "details": [
          {
            "code": "VMExtensionProvisioningError",
            "message": "VM has reported a failure when processing extension 'enablevmaccess'. Error message: \"Cannot update Remote Desktop Connection settings for built-in Administrator account. Error: The system is not authoritative for the specified account and therefore cannot complete the operation. Please retry the operation using the provider associated with this account. If this is an online provider please use the provider's online site.\r\n\"."
          }
        ]
      }
    }

So I then tried redeploying the VM, which gave this error: Failed to redeploy the virtual machine 'MY_AZURE_VM'. Error: VM has reported a failure when processing extension 'enablevmaccess'. Error message: "Cannot update Remote Desktop Connection settings for built-in Administrator account. Error: The system is not authoritative for the specified account and therefore cannot complete the operation. Please retry the operation using the provider associated with this account. If this is an online provider please use the provider's online site."

The message \""The system is not authoritative for the specified account\"" hints at some permissions failure somewhere.

What does this mean - and how can I fix it?

""",azure-virtual-machine +54836251,"Azure Load Balancing Solution - Application Gateway or Azure Load Balancer

Note: I'm still in learning phase.

Question: For the scenario described below, in the Load Balancing settings for the two VMs in the FrontEnd subnet, should I choose Application Gateway or Azure Load Balancer?

In Azure portal when I create the VMs for FrontEnd the Networking tab of the wizard gives me two choices shown below:

[screenshot: the two load balancing choices in the portal wizard]

Why the confusion:

For Load Balancing Internet Traffic to VMs this tutorial does not choose Application Gateway. But the 5th bullet of the following scenario seems to indicate I should choose Application Gateway

Scenario

This tutorial from official Azure team describes designing an infrastructure for a simple online store as follows:

[diagram: the online store infrastructure]

The above configuration incorporates:

  • A cloud-only virtual network with two subnets (FrontEnd and BackEnd)
  • Azure Managed Disks with both Standard and Premium disks
  • Four availability sets, one for each tier of the online store
  • The virtual machines for the four tiers
  • An external load balanced set for HTTPS-based web traffic from the Internet to the web servers
  • An internal load balanced set for unencrypted web traffic from the web servers to the application servers
  • A single resource group
""",azure-virtual-machine +46750128,10 Inserts by 10 Threads at the same time in Azure table storage using c#

What will happen if I get responses from 10 threads and some of them try to insert the responses simultaneously into the table located in Azure Table storage? Will it break or throw an error? If yes, how do I handle that in C#?
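
A sketch of what's at stake (an illustration, not from the original post): Table storage accepts concurrent inserts; the failure mode for a plain insert is a conflict when two writers use the same PartitionKey+RowKey, which in C# surfaces as a StorageException with HTTP status 409 that you catch and handle (retry, or pick unique row keys). The simulated store below shows the pattern with 10 threads:

```python
import threading
import uuid

# Simulated table store: insert fails only when PartitionKey+RowKey collide,
# mirroring Table storage's 409 Conflict for duplicate-entity inserts.
class FakeTable:
    def __init__(self):
        self._rows = {}
        self._lock = threading.Lock()

    def insert(self, partition_key, row_key, entity):
        with self._lock:
            if (partition_key, row_key) in self._rows:
                raise KeyError("409 Conflict: entity already exists")
            self._rows[(partition_key, row_key)] = entity

table = FakeTable()
errors = []

def worker(i):
    try:
        # A unique RowKey per response avoids conflicts entirely.
        table.insert("responses", "%d-%s" % (i, uuid.uuid4()), {"thread": i})
    except KeyError as exc:
        errors.append(exc)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With unique row keys all 10 inserts succeed; with a shared row key, all but one writer would need a catch-and-retry path.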

,azure-storage +42366988,"Pass variables at queue time to build when triggered by continuous integration

In VSTS I have a build definition configured to use Continuous Integration. In this definition I defined some custom variables which are allowed to be set at queue time. However, I can only set these variables when I manually trigger a build.

Is there a way to set these variables for each build when using Continuous Integration?

""",azure-devops +39833020,Multiple Azure Sql Databases with one Web App Instance

Hi I'm working on an application that has 200+ SQL Azure databases and one Web App instance.

The web application is frequently calling all these databases depending on the request (only one database connection is used per request).

The problem we have seen lately is that timeouts and other connectivity issues are happening more frequently.

I'm starting to think that it could be all the TCP connections that need to be maintained by the connection pool. Because of Azure SQL there will be one database connection per database in the pool; they can't share connections.

Is my assumption correct or could there be anything else?

,azure-web-app-service +54162952,Which is the default web server running on NodeJS based Azure Web App(Linux)?

Trying to deploy and access a React.js monorepo application on an Azure Web App. Which is the default web server running on a Node.js-based Azure Web App (Linux)? How do I run Node.js apps?

,azure-devops +56355802,"Terraform deploying azure function resulting in \FAILED TO DOWNLOAD ZIP FILE.txt\""

I'm trying to deploy Azure Functions using Terraform, but I keep getting just a file named "FAILED TO DOWNLOAD ZIP FILE.txt" instead of the actual function being deployed.

It works if I paste the actual SAS blob string extracted from Azure (from a previously deployed storage account), but the Terraform script fails. The zip file seems to get deployed correctly to blob storage.

I pretty much copy-pasted this example here: http://vgaltes.com/post/deploying-azure-functions-using-terraform/

I'm new to Terraform, so there may be something obvious I'm missing here...

    resource "azurerm_resource_group" "rg" {
      name     = "myName"
      location = "northEurope"
    }

    resource "random_string" "storage_name" {
      length  = 16
      special = false
      upper   = false
    }

    resource "random_string" "function_name" {
      length  = 16
      special = false
      upper   = false
    }

    resource "random_string" "app_service_plan_name" {
      length  = 16
      special = false
    }

    resource "azurerm_storage_account" "storage" {
      name                     = "${random_string.storage_name.result}"
      resource_group_name      = "${azurerm_resource_group.rg.name}"
      location                 = "${azurerm_resource_group.rg.location}"
      account_tier             = "Standard"
      account_replication_type = "LRS"
    }

    resource "azurerm_storage_container" "storage_container" {
      name                  = "func"
      resource_group_name   = "${azurerm_resource_group.rg.name}"
      storage_account_name  = "${azurerm_storage_account.storage.name}"
      container_access_type = "blob"
    }

    resource "azurerm_storage_blob" "storage_blob" {
      name                   = "HelloWorld.zip"
      resource_group_name    = "${azurerm_resource_group.rg.name}"
      storage_account_name   = "${azurerm_storage_account.storage.name}"
      storage_container_name = "${azurerm_storage_container.storage_container.name}"
      type                   = "block"
      source                 = "./../FunctionAppZip/HelloWorld.zip"
    }

    data "azurerm_storage_account_sas" "storage_sas" {
      connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
      https_only        = false

      resource_types {
        service   = false
        container = false
        object    = true
      }

      services {
        blob  = true
        queue = true
        table = true
        file  = true
      }

      start  = "2019–05–21"
      expiry = "2029–05–21"

      permissions {
        read    = true
        write   = true
        delete  = true
        list    = true
        add     = true
        create  = true
        update  = true
        process = true
      }
    }

    resource "azurerm_app_service_plan" "plan" {
      name                = "${random_string.app_service_plan_name.result}"
      location            = "${azurerm_resource_group.rg.location}"
      resource_group_name = "${azurerm_resource_group.rg.name}"
      kind                = "functionapp"

      sku {
        tier = "Dynamic"
        size = "Y1"
      }
    }

    resource "azurerm_function_app" "function" {
      name                      = "${random_string.storage_name.result}"
      location                  = "${azurerm_resource_group.rg.location}"
      resource_group_name       = "${azurerm_resource_group.rg.name}"
      app_service_plan_id       = "${azurerm_app_service_plan.plan.id}"
      storage_connection_string = "${azurerm_storage_account.storage.primary_connection_string}"

      app_settings {
        FUNCTIONS_WORKER_RUNTIME = "dotnet"
        FUNCTION_APP_EDIT_MODE   = "readwrite"
        https_only               = false
        HASH                     = "${base64sha256(file("./../FunctionAppZip/HelloWorld.zip"))}"
        WEBSITE_RUN_FROM_PACKAGE = 1
        WEBSITE_USE_ZIP          = "https://${azurerm_storage_account.storage.name}.blob.core.windows.net/${azurerm_storage_container.storage_container.name}/${azurerm_storage_blob.storage_blob.name}${data.azurerm_storage_account_sas.storage_sas.sas}"
      }
    }

When I download the Azure Function content, it's just a file named "FAILED TO DOWNLOAD ZIP FILE.txt"

containing this:

% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (22) The requested URL returned error: 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.

Any suggestions on what I'm doing wrong?
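
One detail worth checking in the configuration above (an observation, not a confirmed diagnosis): the `start` and `expiry` values contain U+2013 en-dashes rather than ASCII hyphens, which can yield an invalid SAS and exactly this kind of 403. A small checker sketch:

```python
import re

def check_sas_date(value):
    """Return a list of problems found in a SAS start/expiry date string."""
    problems = []
    for ch in value:
        if ord(ch) > 127:
            problems.append("non-ASCII character %r (U+%04X)" % (ch, ord(ch)))
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
        problems.append("not ISO YYYY-MM-DD with ASCII hyphens")
    return problems

# "2019-05-21" passes; the en-dash variant pasted in the config does not.
ok = check_sas_date("2019-05-21")
bad = check_sas_date("2019\u201305\u201321")  # en-dashes, as in the config
```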

""",azure-functions +55694797,How To Create Database Index Without Waiting To Complete

I'm using Azure Functions to insert lots of data into a SQL Server database once per month but at the same time I want to create an index on one of the tables.

Is it possible to kick off the index creation and have it run until complete without it tying up the rest of the function?

In other words can the function complete and the index carries on building in the background?

Thanks
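
The pattern being asked about is "hand off and return": the function enqueues the long-running DDL and completes, while something else executes it. In a real function app the queue should be durable (for example a storage queue feeding a second function), since in-process background threads may not outlive the invocation; the in-memory sketch below only illustrates the shape (the DDL string is hypothetical):

```python
import queue
import threading
import time

tasks = queue.Queue()

def index_builder():
    # Stand-in for a worker that runs the DDL on its own connection,
    # e.g. cursor.execute(sql); here it just simulates slow work.
    while True:
        sql = tasks.get()
        time.sleep(0.01)
        tasks.task_done()

threading.Thread(target=index_builder, daemon=True).start()

def function_entry_point():
    # Hypothetical DDL; the function hands it off and returns immediately.
    tasks.put("CREATE INDEX IX_Monthly ON dbo.MonthlyLoad (CustomerId)")
    return "done"

result = function_entry_point()  # returns without waiting for the build
tasks.join()                     # only this demo waits, to show completion
```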

,azure-functions +46561839,"How to RDP connect to an Azure VM

I would like to run some tests on some VM machines. The machines belong to different users with different MSDN accounts which means private passwords.

What I did so far was to create an Azure VM for each MSDN account and set a similar user name/password for each machine.

What I would like to do is to:

  1. Connect to any of these VMs. My problem: I don't know the machine name. I tried to connect using the RDP file provided by Azure and it's working, but the problem is that it uses an IP instead of a name. I tried finding the machine name, but all the documentation about this seems to be outdated. [screenshot] I tried to connect to amam10x64.westeurope.cloudapp.azure.com but without success.

  2. Copy a file to/from the VM. My hope is that I can use the following snippet:

    $commandStr = [string]::Format("Copy-VMFile ""{0}"" -SourcePath ""{1}"" -DestinationPath ""{2}"" -CreateFullPath -FileSource Host -Force", $VM, $SessionPath, $RemoteFullPath)
    $commandBlock = [scriptblock]::Create($commandStr)
    Invoke-Command -Session $sess -ScriptBlock $commandBlock

  3. Run a command on the VM. Hopefully I can use same command from Pt. 2.

""",azure-virtual-machine +56682112,"How I can update test case execution status in DevOps using API

I need to update the test case execution status ("Pass" or "Fail") once the test case is executed. This needs to be done through pytest execution. I looked into a couple of resources, but I couldn't find any way to update the test case execution status through the API. I can get execution details by execution ID, but I found no reference for getting the execution details of a test case by test case ID.
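
For reference, a hedged sketch of how an outcome update is typically shaped against the Azure DevOps Test Results REST API (outcomes belong to a test run and are PATCHed onto its results; the organization, project, run and result ids below are placeholders, and the api-version should be verified for your account):

```python
import json

# Builds the URL and JSON body for PATCHing a test result's outcome.
# Send with any HTTP client plus a personal access token.
def build_outcome_patch(org, project, run_id, result_id, passed,
                        api_version="5.0"):
    url = ("https://dev.azure.com/%s/%s/_apis/test/Runs/%d/results"
           "?api-version=%s" % (org, project, run_id, api_version))
    body = [{"id": result_id, "outcome": "Passed" if passed else "Failed"}]
    return url, json.dumps(body)

url, body = build_outcome_patch("myorg", "myproject", 42, 100000, passed=True)
```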

Please guide me here.

""",azure-devops +49337761,"Debug high CPU usage in Azure WebApp (Linux)

I have set up an Azure Web App (Linux) to run WordPress and another handmade PHP app on it. All works fine, but I get this weird CPU usage graph (see below).

Both apps are PHP7.0 containers.

SSHing in to the two containers and using top I see no unusual CPU hogging processes.

When I reset both apps the CPU goes back to normal and then starts to rise slowly, as shown below.

The amount of HTTP requests to the apps has no relation to the CPU usage at all.

I tried to use apache2ctl to see if there are any pending requests, but that doesn't seem possible inside a Docker container.

Anybody got an idea how to track down the cause of this?

[screenshot: CPU usage graph]

This is the top output. The instance has 2 cores. Lots of idle time but still over 100% load and none of the processes use the CPU ...

[screenshot: top output]

""",azure-web-app-service +25235117,How to ask/reply questions in a work item in Visual Studio Online?

In Visual Studio Online someone created a work item for me, but I didn't quite understand it. How do I ask questions in a work item?

,azure-devops +50944583,"Azure Function - Could not load file or assembly

I'm running an Azure Function on .NET Standard 2.0 and get the following error:

An exception of type 'System.IO.FileLoadException' occurred in Function.dll but was not handled in user code: Could not load file or assembly 'Microsoft.WindowsAzure.Storage, Version=9.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

[screenshot: assembly reference]

The assembly file exists in the bin/debug folder. I've been reading some threads about this but without a solution (https://github.com/Azure/azure-functions-core-tools/issues/322#issuecomment-352233979). Anyone know what to do?

I'm using code from another .NET Standard 2.0 project but all my projects have a reference to Microsoft.WindowsAzure.Storage 9.2.0.0 and that nuget package installed.

Thanks!

""",azure-functions +51402803,sync data between two storage accounts in azure

Say I have two storage accounts, i.e. Storage 1 and Storage 2. When there is an entry in Storage 1, it should be automatically synced to Storage 2 in Azure, for all services (table, file, blob). Is there any way to do this?

,azure-storage +16692109,"How do you deploy an EXE file into my Azure account?

I have an EXE file which, when called, opens a PDF viewer. I used it in my web application and tried to deploy it on my Azure cloud as a service. Everything worked fine, but when I clicked the link or button that calls it, I got a runtime error: "location changed or moved". All I want is for that viewer application to run on my Azure cloud.

I'm sorry if I asked this in the wrong place, but my only need is to deploy that EXE in working condition. I'm new to the cloud environment, so please suggest any way by which I can do this: cloud service, cloud website, virtual machine, etc. Any single word from your wise mind will help me a lot.

the code that i used to call that exe is.

    protected void Button1_Click(object sender, EventArgs e)
    {
        // Create an instance of the Process class responsible for starting the new process.
        System.Diagnostics.Process process1 = new System.Diagnostics.Process();
        // Set the directory where the file resides
        process1.StartInfo.WorkingDirectory = Request.MapPath("~/");
        // Set the filename of the file you want to open
        process1.StartInfo.FileName = Request.MapPath("ProjectViewer.exe");
        // Start the process
        process1.Start();
    }
""",azure-storage +53375193,"Azure storage not finding csv file

I am trying to read a CSV file from my Azure storage account, to convert each line into an object and build a list of those objects. It keeps erroring, and the reason is it can't find the file (blob not found). The file is there, and it is a CSV file.

[screenshot: the file in the storage container]

Error:

StorageException: The specified blob does not exist. BatlGroup.Site.Services.AzureStorageService.AzureFileMethods.ReadCsvFileFromBlobAsync(CloudBlobContainer container, string fileName) in AzureFileMethods.cs + await blob.DownloadToStreamAsync(memoryStream);

    public async Task<Stream> ReadCsvFileFromBlobAsync(CloudBlobContainer container, string fileName)
    {
        // Retrieve reference to a blob (fileName)
        var blob = container.GetBlockBlobReference(fileName);

        using (var memoryStream = new MemoryStream())
        {
            // downloads blob's content to a stream
            await blob.DownloadToStreamAsync(memoryStream);
            return memoryStream;
        }
    }

I've made sure the file is public. I can download any text file that is stored there but none of the csv files.

I am also not sure what format to take it in as I need to iterate through the lines.

I see examples of bringing the whole file down to a temp drive and working with it there, but that seems unproductive, as then I could just store the file in the wwwroot folder instead of Azure.

What is the most appropriate way to read a CSV file from Azure storage?
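
Independent of the blob-not-found error, note that the C# above returns the MemoryStream from inside its using block (so it is disposed before the caller can read it) and never resets the position after the download. The equivalent pattern, sketched in Python: rewind the buffer, then iterate it as CSV before it goes away:

```python
import csv
import io

def rows_from_blob_bytes(blob_bytes):
    buf = io.BytesIO(blob_bytes)
    buf.seek(0)  # rewind: after a download, the cursor sits at the end
    text = io.TextIOWrapper(buf, encoding="utf-8")
    return list(csv.DictReader(text))  # consume before the buffer is closed

rows = rows_from_blob_bytes(b"Id,Value\n111,0.1\n222,7.3\n")
```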

""",azure-storage +55964626,"Node cannot read environmental variable from Azure

I'm following an introductory course on Azure Web Apps. One specific tutorial shows how to get an environmental parameter previously set from the Azure portal and display it in your webpage but this doesn't work for me.

The code is really simple and I'm only pasting the server response where the env parameter should go

    var server = http.createServer(function(request, response) {
        response.writeHead(200, {"Content-Type": "text/html"});
        response.write("<!DOCTYPE html>");
        response.write("<html>");
        response.write("<head>");
        response.write("<title>Hello</title>");
        response.write("</head>");
        response.write("<body>");
        response.write(`Hello from ${process.env.MyParameter}!`);  // PROBLEM HERE
        response.write("</body>");
        response.write("</html>");
        response.end();
    });

Of course I've set up a new application setting in my Azure app Configuration that is called MyParameter.
Now if I want to display some plain text such as response.write("Hello world"); it works perfectly, but when I try to get the env variable I get an HTTP ERROR 500 - This page isn't working error.

What am I doing wrong?

""",azure-web-app-service +49981159,"Calling Azure function from Angular 1.6 CORS problems

I'm trying to call an Azure Function of mine, and it's failing:

    $http.post(url, {data: data, headers: headers})
      .success(function (jSendResponse, status, headers) {
        console.warn("worked");
      })
      .error(function (errResponse) {
        console.warn('failed')
      });

Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '<Origin domain>' is therefore not allowed access. The response had HTTP status code 400.

I have changed the CORS settings for this particular Azure Function: first by specifying my exact domain, and then by adding * as the last entry in the list of allowed origins.

But the error message remains.

What am I doing wrong?

""",azure-functions +42831038,"ARM Template containing config settings for web app

I am encountering strange behavior when deploying an ARM template.

I have the following template: (Note that sasUrl value 'xxx' has a real working value in my file)

    {
      "name": "[variables('webAppServiceName')]",
      "type": "Microsoft.Web/sites",
      "location": "[resourceGroup().location]",
      "apiVersion": "2016-08-01",
      "dependsOn": [
        "[concat('Microsoft.Web/serverfarms/', variables('appServicePlanName'))]"
      ],
      "tags": {
        "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', variables('appServicePlanName'))]": "Resource",
        "displayName": "[variables('webAppServiceName')]"
      },
      "properties": {
        "name": "[variables('webAppServiceName')]",
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('appServicePlanName'))]"
      },
      "resources": [
        {
          "apiVersion": "2014-11-01",
          "name": "appsettings",
          "type": "config",
          "dependsOn": [
            "[concat('Microsoft.Web/sites/', variables('webAppServiceName'))]",
            "[concat('Microsoft.Web/certificates/', variables('certificateName'))]"
          ],
          "tags": {
            "displayName": "WebAppSettings"
          },
          "properties": {
            "WEBSITE_LOAD_CERTIFICATES": "[reference(resourceId('Microsoft.Web/certificates', variables('certificateName')), providers('Microsoft.Web', 'certificates').apiVersions[0]).thumbprint]"
          }
        },
        {
          "apiVersion": "2016-08-01",
          "name": "Microsoft.ApplicationInsights.Profiler.AzureWebApps",
          "type": "siteextensions",
          "dependsOn": [
            "[resourceId('Microsoft.Web/Sites', variables('webAppServiceName'))]"
          ],
          "properties": {}
        },
        {
          "apiVersion": "2015-08-01",
          "name": "logs",
          "type": "config",
          "dependsOn": [
            "[resourceId('Microsoft.Web/Sites', variables('webAppServiceName'))]"
          ],
          "properties": {
            "applicationLogs": {
              "fileSystem": {
                "level": "Off"
              },
              "azureTableStorage": {
                "level": "Off"
              },
              "azureBlobStorage": {
                "level": "[parameters('applicationLogLevel')]",
                "sasUrl": "xxx"
              }
            },
            "httpLogs": {
              "fileSystem": {
                "enabled": false
              },
              "azureBlobStorage": {
                "enabled": true,
                "sasUrl": "xxx"
              }
            },
            "failedRequestsTracing": {
              "enabled": "[parameters('enableFailedRequestTracing')]"
            },
            "detailedErrorMessages": {
              "enabled": "[parameters('enableDetailedErrorMessages')]"
            }
          }
        }
      ]
    }

When deploying this template without modifying anything, the config section 'logs' is not deployed correctly about 1 time in 2. I have just tested the ARM template again: on the first deployment the web app did not have the correct settings for diagnostics logging, nor on the second, but on the third they were OK. On the fourth deployment the settings were wrong again. This part of the template seems to have no consistent behavior.

Am I overlooking something?

""",azure-web-app-service +41781174,"Update work item relations/links in VS Team Services

I am trying to use the VSTS API to remove all parent links on items and set those parents as related items.

https://www.visualstudio.com/en-us/docs/integrate/api/wit/work-items#update-work-items

I do not fully understand how the "Path" needed to remove relations works. I am getting inconsistent results where sometimes it works and sometimes it doesn't (so I'm clearly doing it wrong).

I am making an assumption that it's simply the order returned by the API. So for example:

  • Index[0] item
  • Index[1] item
  • Index[2] item <- this is the one I want to remove so I use index 2

        public void RemoveParentLink(int pathIndex, int itemToUpdate, string link)
        {
            JsonPatchDocument patchDocument = new JsonPatchDocument();
            patchDocument.Add(
                new JsonPatchOperation()
                {
                    Operation = Operation.Remove,
                    Path = $"/relations/{pathIndex}"
                }
            );
            WorkItem result = witClient.UpdateWorkItemAsync(patchDocument, itemToUpdate).Result;
        }

The documentation states that Path is:

Path to the value you want to add replace remove or test. For a specific relation use \""relations/Id\"". For all relations use \""/relations/-\"".

The index is NOT the Id, of course, but how do I get the relation's index (or Id) exactly?
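
A sketch of one way to get the index (assuming, as the question does, that the JSON Patch path takes the zero-based position in the work item's current relations array): fetch the item with $expand=relations, locate the relation by the related work item's URL, and build the remove operation from that index. The relations data below is illustrative:

```python
def remove_relation_patch(relations, related_item_id):
    """Build a JSON Patch that removes the relation pointing at the given
    work item id, using the relation's zero-based position as the index."""
    for idx, rel in enumerate(relations):
        if rel["url"].rstrip("/").endswith("/%d" % related_item_id):
            return [{"op": "remove", "path": "/relations/%d" % idx}]
    raise ValueError("no relation pointing at that work item")

# Illustrative relations as returned with $expand=relations (made-up URLs):
relations = [
    {"rel": "System.LinkTypes.Hierarchy-Reverse",
     "url": "https://myaccount.visualstudio.com/_apis/wit/workItems/101"},
    {"rel": "System.LinkTypes.Related",
     "url": "https://myaccount.visualstudio.com/_apis/wit/workItems/202"},
]
patch = remove_relation_patch(relations, 202)
```

Since the index shifts whenever relations change, re-fetching right before each PATCH avoids the inconsistent results described above.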

""",azure-devops +40606892,Prevent a public service from overusing

I have a web api 2 that I want to host on azure-app-service. The service should be called by javascript applications so as far as I know it has to be open to public (right?).

However, if I let it be totally open, it is vulnerable to DoS. What is the best way to handle that?

The first thing that came to my mind was to implement a custom IP filter that keeps the requests from the last x minutes and lets through only clients with fewer than y occurrences.

Is there any other way? Is there any specific way to do it on the azure without writing code?


This is not a broad question! I think it is clear what I am asking: I have a service on Azure and I want to protect it from overuse. How broad is that?!
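
The custom IP filter described above can be sketched as a sliding-window limiter (in-memory only; with multiple instances the state would need to be shared, and platform options such as App Service access restrictions or a gateway/WAF avoid custom code entirely):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> recent request timestamps

    def allow(self, ip, now=None):
        if now is None:
            now = time.monotonic()
        q = self.hits[ip]
        while q and now - q[0] > self.window:  # drop entries older than window
            q.popleft()
        if len(q) >= self.limit:
            return False  # reached y occurrences within the window: reject
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=3, window=60.0)
results = [limiter.allow("1.2.3.4", now=t) for t in (0, 1, 2, 3)]
```

The fourth request from the same IP inside the window is rejected, while other clients remain unaffected.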

,azure-web-app-service +47281734,"Azure ML Web Service + Python for Querying Pandas Data Frame

I want to use Azure ML Web Service for a non-machine-learning task with Python. The goal is the following:

I have a Pandas DF like this:

       Id   Value
    0  111  0.1
    1  222  7.3
    2  333  3.1
    3  444  5.0

I can query this DF successfully (what is the value of a certain row by Id?):

    float(df.loc[pot['Id'] == 222, 'Value'])

Now I want to deploy a function to an Azure ML Web Service with this functionality, where the function uses an uploaded dataset as a fixed lookup table. I constructed a function which gets an Id number as an argument, looks up the value in the pre-uploaded dataset, and gives it back as a float:

    from azureml import services
    import pandas as pd

    @services.publish(workspace_id, workspace_token)
    @services.types(id=int)
    @services.returns(float)
    def my_func(id):
        my_df = ws.datasets["uploaded_df.csv"].to_dataframe()
        return float(my_df.loc[cent['Id'] == id, 'Value'])

I can deploy it to Azure Web Services, but when I try to run a test query it gets stuck (no way even to peek into the details). What is the problem here?
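
As an aside on the lookup itself (separate from the deployment hang): the pattern is a plain id-to-value mapping, which can be illustrated without pandas; note that the snippets above reference pot['Id'] and cent['Id'] where the frame variable itself (df / my_df) was presumably meant:

```python
def value_by_id(rows, lookup_id):
    """rows: list of {'Id': ..., 'Value': ...} records, standing in for the
    uploaded dataframe; returns the Value for the matching Id as a float."""
    for row in rows:
        if row["Id"] == lookup_id:
            return float(row["Value"])
    raise KeyError(lookup_id)

rows = [{"Id": 111, "Value": 0.1}, {"Id": 222, "Value": 7.3},
        {"Id": 333, "Value": 3.1}, {"Id": 444, "Value": 5.0}]
v = value_by_id(rows, 222)
```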

""",azure-web-app-service +47328693,Fail over from Azure VM to Azure Web App

We want to fail over from an Azure VM (Windows Server) to an Azure Web App. The VM hosts the web application, and the Web App is just a page showing a "Website Under Maintenance" message. This is for planned/unplanned maintenance events on the server. Once the maintenance is over, how do we switch back? Thanks in advance.

,azure-virtual-machine +47762970,"How to cancel an upload started with BlobService.createBlockBlobFromBrowserFile?

I'm using Microsoft Azure Storage Client Library's BlobService.createBlockBlobFromBrowserFile to allow users to upload files to an Azure Storage container. I'd like to provide a way for them to cancel an in-progress upload e.g. in case it's big and taking too long or they chose the wrong file. Is there any way I can do this though? I can't see anything obvious in the API.

My code is based on these samples e.g.

    var file = document.getElementById('fileinput').files[0];

    var customBlockSize = file.size > 1024 * 1024 * 32 ? 1024 * 1024 * 4 : 1024 * 512;
    blobService.singleBlobPutThresholdInBytes = customBlockSize;

    var finishedOrError = false;
    var speedSummary = blobService.createBlockBlobFromBrowserFile('mycontainer', file.name, file, {blockSize: customBlockSize}, function(error, result, response) {
        finishedOrError = true;
        if (error) {
            // Upload blob failed
        } else {
            // Upload successfully
        }
    });
    refreshProgress();

The SpeedSummary object returned from createBlockBlobFromBrowserFile is, I think, this one, which doesn't have anything like that available.

Also asked on MSDN here.

""",azure-storage +42642833,Could not load file or assembly 'Microsoft.Owin.Host.SystemWeb on VSTS

Locally all my tests run fine, but when I do a build on VSTS I get this error:

##[error]System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Owin.Host.SystemWeb, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
WRN: Assembly binding logging is turned OFF.

Though the nuget restore build step says:

Restoring NuGet package Microsoft.Owin.Host.SystemWeb.3.0.1. 

and

Adding package 'Microsoft.Owin.Host.SystemWeb.3.0.1' to folder 'D:\a\1\s\packages'
Added package 'Microsoft.Owin.Host.SystemWeb.3.0.1' to folder 'D:\a\1\s\packages'
,azure-devops +54347532,AI (Internal): Reached message limit. End of EventSource error messages

I am running an Azure Function with an EventHubTrigger.

This trace message is popping up all over our AppInsights instance. I don't know what it means or what could be the cause. I'm happy to provide any details that can help debug.

Trace message

2019-01-24T04:12:33.467 AI (Internal): Reached message limit. End of EventSource error messages.

sdkVersion : dotnet:2.7.2-23439

2019-01-24T04:12:33.467 AI (Internal): [Microsoft-ApplicationInsights-Core] Reached message limit. End of EventSource error messages.

sdkVersion : dotnet:2.8.1-22898

,azure-functions +7118942,In Windows Azure: What are web role, worker role and VM role?

The application I work on contains a web role: it's a simple web application. I needed to host the application in Windows Azure, so I created a web role. I actually want to know what these roles are for. What is their significance, coding-wise or storage-wise?

,azure-virtual-machine +52521171,"What determines which projects in Visual Studio are considered to be publish artifacts in an Azure DevOps build

I have a solution with multiple projects (.NET), most of them web applications and one console application.

I have created a new solution configuration in Visual Studio called QA.

On my Azure DevOps CI Build I have set up my BuildConfiguration to QA.

Everything builds, but when I check my Publish Artifact I don't see my Console Application project artifact.

Is anybody else having this issue?

What determines which projects get published as an artifact?

Thanks in advance

""",azure-devops +51145124,"How to list all blobs inside of a specific subdirectory in Azure Cloud Storage using Python?

I worked through the example code from the Azure docs https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python

from azure.storage.blob import BlockBlobService

account_name = "x"
account_key = "x"
top_level_container_name = "top_container"

blob_service = BlockBlobService(account_name, account_key)

print("\nList blobs in the container")
generator = blob_service.list_blobs(top_level_container_name)
for blob in generator:
    print("\t Blob name: " + blob.name)

Now I would like to know how to get more fine-grained in my container walking. My container top_level_container_name has several subdirectories:

  • top_level_container_name/dir1
  • top_level_container_name/dir2
  • etc in that pattern

I would like to be able to list all of the blobs that are inside just one of those directories. For instance

  • dir1/a.jpg
  • dir1/b.jpg
  • etc

How do I get a generator of just the contents of dir1 without having to walk all of the other dirs? (I would also take a list or dictionary)

I tried adding /dir1 to the name of the top_level_container_name so it would be top_level_container_name = "top_container/dir1", but that didn't work. I get back this error: azure.common.AzureHttpError: The requested URI does not represent any resource on the server. ErrorCode: InvalidUri

The docs do not seem to even have any info on BlockBlobService.list_blobs() https://docs.microsoft.com/en-us/python/api/azure.storage.blob.blockblobservice.blockblobservice?view=azure-python

Update: list_blobs() comes from https://github.com/Azure/azure-storage-python/blob/ff51954d1b9d11cd7ecd19143c1c0652ef1239cb/azure-storage-blob/azure/storage/blob/baseblobservice.py#L1202
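Update 2: in the meantime I can reproduce the grouping I'm after client-side, since blob names are flat keys and the "directories" are only name prefixes. A minimal sketch of that filtering (sample names invented, no Azure SDK involved); list_blobs also appears to accept a prefix keyword argument that would do the same filtering server-side, though I haven't verified it yet:

```python
# Blob "directories" are purely a naming convention: blob names are flat keys.
blob_names = ["dir1/a.jpg", "dir1/b.jpg", "dir2/c.jpg", "readme.txt"]

def blobs_under(names, virtual_dir):
    """Return the blob names that live under the given virtual directory."""
    prefix = virtual_dir if virtual_dir.endswith("/") else virtual_dir + "/"
    return [name for name in names if name.startswith(prefix)]

print(blobs_under(blob_names, "dir1"))  # ['dir1/a.jpg', 'dir1/b.jpg']
```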

""",azure-storage +56547808,"App service to app service auth in Azure using Managed Identity

I have set up two App Services in Azure, 'Parent' and 'Child', both exposing API endpoints.

  • Child has endpoint 'Get'.
  • Parent has endpoints 'Get' and 'GetChild' (which calls 'Get' on Child using HttpClient).

I want all Child endpoints to require auth via Managed Identity and AAD, and I want all Parent endpoints to allow anonymous. However, in Azure I want to give the Parent App Service permission to call the Child App Service, so that Child endpoints are only accessible via Parent endpoints (or if you have permissions on a user account to directly use Child).

In the Azure Portal:

Authentication/Authorization

  • I have enabled 'App Service Authentication' on both App Services.
  • Child is set to 'Log in with AAD'.
  • Parent is set to 'Allow Anonymous requests'.
  • Both have AAD configured under 'Authentication Providers'.

Identity

  • Set to 'On' for both App Services

Access control (IAM)

  • Child has Parent as Role Assignment: Type = "App Service or Function App" and Role = "Contributor"

With all the above setup:

  • Calling Child -> Get requires me to log in
  • Calling Parent -> Get returns the expected response of 200 OK
  • Calling Parent -> GetChild returns "401 - You do not have permission to view this directory or page"

Without the use of client IDs/secrets/keys/etc. (I thought the idea behind Managed Identity was to throw all that out the window), given all the above, should Parent be able to call Child? And if so, what have I set up wrong?

""",azure-web-app-service +34727829,"How to delete a folder within an Azure blob container

I have a blob container in Azure called pictures that has various folders within it (see snapshot below):

[screenshot]

I'm trying to delete the folders titled users and uploads shown in the snapshot, but I keep getting the error: Failed to delete blob pictures/uploads/. Error: The specified blob does not exist. Could anyone shed light on how I can delete those two folders? I haven't been able to uncover anything meaningful by Googling this issue.

Note: ask me for more information in case you need it

""",azure-storage +47082856,Azure Function App in PowerShell generating host threshold exceeded errors

We just started getting this error yesterday but haven't changed anything in our app. Any ideas? If we restart the function app, it will run for a short time and then start giving us this error again. The function app is in PowerShell.

Host Error: Microsoft.Azure.WebJobs.Script: Host thresholds exceeded: [Connections] 
,azure-functions +10302396,"Azure Role Environment not initialising

My project has suddenly stopped working. I am using local storage and when I try to initialise the role environment it says:

"Microsoft.WindowsAzure.ServiceRuntime Error: 102 : Role environment . FAILED TO INITIALIZE"

and an SEH exception occurs with error code "-2147467259". I start a new instance of the cloud part of my project and then attempt to start a new instance of my WPF application in the same solution. I think when the WPF application is run it stops the cloud instance deployment, but I am not sure.

""",azure-storage +28119248,"detect production or staging in cloud service in azure

Is there a way to detect whether I am in staging or production? There is a RoleInstance class, but it doesn't have a field for this matter.

There is Microsoft sample code, but it's complicated. The link is https://code.msdn.microsoft.com/windowsazure/CSAzureDeploymentSlot-1ce0e3b5#content but I don't understand whether the hostedServiceName in the code is my cloud project name or my cloud service name, what the thumbprint is, where I get it from, and how I register it in Azure.

""",azure-web-app-service +45506009,"Event ID 2284 and 2289 logged Azure Web Apps

Our event log in Azure Web App has a bunch of Event 2289 and 2284 error entries by W3SVC-WP:

[screenshot]

The messages are either something like this:

1
5
\\?\D:\home\LogFiles\W3SVC442144452\
02000780

Or like this:

1
5
50000780

I'm not sure where these come from but they seem to inhibit errors from logging correctly.

""",azure-web-app-service +57633009,"How to configure the @CosmosDBtrigger using java?
  1. I'm setting up @CosmosDBTrigger. I need help with the below code, and also: what needs to go in the name field?

I'm using the tech stack below:

JDK 1.8.0-211, Apache Maven 3.5.3, Azure CLI 2.0.71, .NET Core 2.2.401

Java:

public class Function {
    @FunctionName("CosmosTrigger")
    public void mebershipProfileTrigger(
            @CosmosDBTrigger(name = "?",
                databaseName = "*database_name*",
                collectionName = "*collection_name*",
                leaseCollectionName = "leases",
                createLeaseCollectionIfNotExists = true,
                connectionStringSetting = "DBConnection") String[] items,
            final ExecutionContext context) {
        context.getLogger().info("item(s) changed");
    }
}

What do we need to provide in the name field?

local.settings.json

{
  "IsEncrypted": false,
  "Values": {
    "DBConnection": "AccountEndpoint=*Account_Endpoint*"
  }
}

Expected: function starts

Result: "Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.Cosmostrigger'. Microsoft.Azure.WebJobs.Extensions.CosmosDB: Cannot create Collection Information for collection_name in database database_name with lease leases in database database_name : Unexpected character encountered while parsing value: <. Path '', line 0, position 0. Newtonsoft.Json: Unexpected character encountered while parsing value: <. Path '', line 0, position 0."

""",azure-functions +56802924,"Unable to Communicate from one azure VM in same virtual net to the secondary NIC of another azure VM

I have an Azure Linux VM, say VM1, with only one network interface, with private IP 10.3.0.5. I have another Azure Linux VM, say VM2, with two network interfaces: the private IP on the primary network interface is 10.3.5.4, and the private IP on the secondary network interface is 10.3.4.4. Now, I am able to ping VM2 from VM1 on the primary network interface of VM2 as ping 10.3.5.4, but I am not able to ping it on the secondary network interface as ping 10.3.4.4.

After reading the Azure docs - https://docs.microsoft.com/en-gb/azure/virtual-machines/linux/multiple-nics#configure-guest-os-for-multiple-nics - they say that you have to manually add the required routes to achieve this.

Similar problem in windows VM - https://support.microsoft.com/en-in/help/4048050/troubleshooter-for-azure-vm-connectivity-problems

here is the result of command route -n on VM2

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.3.5.1        0.0.0.0         UG    0      0        0 eth0
10.3.4.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.3.5.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
168.63.129.16   10.3.5.1        255.255.255.255 UGH   0      0        0 eth0
169.254.169.254 10.3.5.1        255.255.255.255 UGH   0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker_gwbridge

Now, as per the above Azure docs, it is missing an entry something like this:

0.0.0.0        10.3.4.1         0.0.0.0        UG     0      0        0  eth1  

I tried adding this route to my VM2, but after running the following command my VM just stopped responding: route add -net 0.0.0.0 netmask 0.0.0.0 gw 10.3.4.1 dev eth1

How do I add the correct route for my problem ? Please help !
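For reference, what I now plan to try, based on my reading of the multi-NIC docs: instead of adding a second default route to the main table (which is presumably what knocked out my connectivity), put eth1's routes in a dedicated routing table and select that table with policy rules, so the main default route via eth0 stays untouched. A sketch using the addresses from my route output above (the table name/number are arbitrary, and I have not yet verified this on my VM):

```
# create a dedicated routing table for eth1 (number/name are arbitrary)
echo "200 eth1rt" | sudo tee -a /etc/iproute2/rt_tables

# eth1's routes live only in that table; the main default route is untouched
sudo ip route add 10.3.4.0/24 dev eth1 src 10.3.4.4 table eth1rt
sudo ip route add default via 10.3.4.1 dev eth1 table eth1rt

# use the eth1 table for traffic from/to the secondary NIC's address
sudo ip rule add from 10.3.4.4/32 table eth1rt
sudo ip rule add to 10.3.4.4/32 table eth1rt
```

If this is the wrong shape for Azure's secondary-NIC routing, corrections welcome.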

""",azure-virtual-machine +29308219,Azure VM - strange fail

I was using my Azure VM, and while installing SQL Express the RDP session stopped. I can't reconnect via RDP, but it asks for a password and, if wrong, denies access. I also tried to reboot and shut down from the Azure Web Panel and Azure PowerShell, with no success. What more can I do?

Thanks

Paulo

,azure-virtual-machine +48875008,"Did it create VM?

So I'm new to Azure - registered about an hour ago. I created a Free Trial account but connected my card to it so I could get $200 for the trial. I tried to create an UbuntuNC24 VM - I wanted to see its hash rate (NOT for cryptocurrency).

It's a very expensive VM, ~$9/h, but I got that $200 for testing, so I wanted to see how it would perform.

But when I created everything and clicked "Finish" (or however that last button is called) I got some API error. After that I refreshed the page and it was on the "Dashboard" as creating/starting... And then it disappeared from there.

I looked under VMs in the menu and under All Resources, and it's not there. As if I never created it.

My question is: did I create it, and if I did, where did it go? I guess it would be a very expensive mistake to just leave it...

It's a debit card with a $2 balance on it, but I don't want to hog a VM if I'm not using it, or go into debt with Azure (if that's possible).

""",azure-virtual-machine +49301247,"Serving an HTML Page from Azure PowerShell Function

I am trying to serve an HTML page from an Azure PowerShell Function. I am able to return the HTML, but I have no clue where I can set the content type to text/html so that the browser interprets the HTML.

Here is an example from Anthony Chu showing how you can do it in C#:

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    var response = new HttpResponseMessage(HttpStatusCode.OK);
    var stream = new FileStream(@"d:\home\site\wwwroot\ShoppingList\index.html", FileMode.Open);
    response.Content = new StreamContent(stream);
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("text/html");
    return response;
}

But in a PowerShell function I just return the file using the Out-File cmdlet and don't have an option to set the content type. Here is a hello-world example:

# POST method: $req
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$name = $requestBody.name

# GET method: each querystring parameter is its own variable
if ($req_query_name) {
    $name = $req_query_name
}

$html = @'
<html>
<header><title>This is title</title></header>
<body>
Hello world
</body>
</html>
'@

Out-File -Encoding Ascii -FilePath $res -inputObject $html

Here is how the response looks in the browser:

[screenshot]

Any idea how I can set the content type so that the Browser interprets the HTML?

""",azure-functions +15294122,"Generic table storage entity retrieval

I am playing with Azure Table Storage entity retrieval and found a good MS example:

http://www.windowsazure.com/en-us/develop/net/how-to-guides/table-services-v17/#retrieve-range-entities

In case you don't want to check the link, here it is:

// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create the table client
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

// Get the data service context
TableServiceContext serviceContext = tableClient.GetDataServiceContext();

// Specify a partition query, using "Smith" as the partition key
CloudTableQuery<CustomerEntity> partitionQuery =
    (from e in serviceContext.CreateQuery<CustomerEntity>("people")
     where e.PartitionKey == "Smith"
     select e).AsTableServiceQuery<CustomerEntity>();

// Loop through the results, displaying information about the entity
foreach (CustomerEntity entity in partitionQuery)
{
    Console.WriteLine("{0}, {1}\t{2}\t{3}", entity.PartitionKey, entity.RowKey,
        entity.Email, entity.PhoneNumber);
}

Now this works perfectly, but I want to generalize it: I want to pass CustomerEntity as a parameter and "people" as a parameter (easy, a string table name) and make it reusable.

So the trick is passing CustomerEntity as a parameter. Please help :)

""",azure-storage +51305138,Microsoft Azure - 2 VMs behind a public facing Load Balancer

I want to have 2 VMs behind a public-facing Load Balancer. Both VMs are in an Availability Set spread across 2 fault domains and 5 update domains (the defaults for availability sets on the portal).
Strangely, the Load Balancer does not show the Availability Set when I try to configure the Backend Pool, and hence I'm unable to configure Inbound NAT Rules for the VMs.
Neither VM has a public IP; they just have their private IPs.
How do I proceed?

,azure-virtual-machine +43982972,"Azure Functions - Shared code across Function Apps

Is there a way of sharing common code across two different Function Apps in Azure?

I understand it is possible to share code between two functions under the same Function App like so:

#load "../Shared/ServiceLogger.csx"

but I would like to share logging code between two different functions, each under its own Function App. The reason for the functions being under two different Function Apps is that I need one to run on the Consumption plan and the other on an App Service plan, unless there is another way of doing this?

""",azure-functions +42117135,"send a full brokered message in azure service bus from an azure function

I'm having trouble sending a complete brokered message to an Azure Service Bus output binding from an Azure Function in JavaScript. The documentation only shows a simple body message (https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus), without any custom properties.

My attempts to create a full brokered message have failed so far: everything goes into the body.

[screenshots]

var message = { 'body': 'test', 'customProperties': { 'fromsystem': 'sap' } };
context.bindings.outputSbMsg = message;
context.done(null, res);
""",azure-functions +54216929,"Uniquely identify disk

When I provisioned the Azure VM, I explicitly gave each disk a name based on what type of SQL Server files/database it is going to be used for. Please see the image below. [screenshot]

However, the Get-PhysicalDisk command output shows: [screenshot]

What I am trying to do is create a Storage Pool based on the name I specified during VM creation, i.e. create a storage pool called TempDB using _TempDBData_1 and _TempDBData2.

Thanks

""",azure-virtual-machine +43760323,"Node.js Azure sdk - getting the Virtual Machine state

I've started to look into the Azure SDK for Node.js (link below) and, interestingly enough, I've hit a wall in what I'd imagine would be one of the most common tasks one would want to achieve using Azure's REST endpoints: checking the status of a virtual machine.

I can easily get a list of all machines, or one in particular, but the response from these services doesn't include the current status of the VM (running, stopped, etc.).

There's absolutely no info out there regarding this particular scenario, in the docs or on the web, other than a GitHub issue (https://github.com/Azure/azure-xplat-cli/issues/2565), which is actually about a different library.

Please note that I'm using the azure-arm-compute library, which is part of the Node.js Azure SDK.

Any help would be very much appreciated

github repo: https://github.com/Azure/azure-sdk-for-node

""",azure-virtual-machine +54182432,"HTTP 500.79 Error / System.UriFormatException when deploying ASP.NET App to Azure Web App

I am attempting to set up a staging environment for an ASP.NET MVC application and want to do that as an Azure Web App, but I am really stuck on an HTTP 500 at this point.

The error I get is:

500.79: The request failed because of an unhandled exception in the Easy Auth module. 

Using diagnostics logs I was able to get a Stack Trace:

2019-01-14T13:07:22  PID[14448] Critical    System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.UriFormatException: Invalid URI: The format of the URI could not be determined.
   at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
   at System.Uri..ctor(String uriString, UriKind uriKind)
   at Microsoft.Azure.AppService.Middleware.ModuleConfig.set_OpenIdIssuer(String value)
   --- End of inner exception stack trace ---
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at System.Reflection.RuntimePropertyInfo.SetValue(Object obj, Object value, BindingFlags invokeAttr, Binder binder, Object[] index, CultureInfo culture)
   at System.Reflection.RuntimePropertyInfo.SetValue(Object obj, Object value, Object[] index)
   at Microsoft.Azure.AppService.Middleware.MiddlewareConfig.TryLoadConfig(Type type, HttpContextBase context)
   at Microsoft.Azure.AppService.Middleware.ModuleConfig.EnsureConfigLoaded(HttpContextBase context)
   --- End of inner exception stack trace ---
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Microsoft.Azure.AppService.Middleware.ModuleManager.LoadModuleConfig(HttpContextBase context)
   at Microsoft.Azure.AppService.Middleware.ModuleManager.LoadAllModulesAndGetEnabledModules(HttpContextBase context)
   at Microsoft.Azure.AppService.Middleware.HttpModuleDispatcher.EnsureInitialized(HttpContextBase context)
   at Microsoft.Azure.AppService.Middleware.HttpModuleDispatcher.<DispatchAsync>d__11.MoveNext()

I've used every other diagnostics tool I was able to find on Azure but was not able to figure out much more, except one faint hint: using the Failed Request Tracing feature, I noticed that the traces state that the requests were made to very odd and obviously wrong URLs:

URL according to Request Trace: https://Skillmanagementtest:80

Actually requested URL: https://skillmanagementtest.azurewebsites.net

Screenshot: Failed Request Trace

I have absolutely no clue where either this URL or the port comes from; I have never specified port 80 anywhere (which is wrong anyway, as it's HTTPS). "Skillmanagementtest" is the name I have given the Azure Web App, and I don't think I have used it anywhere else. The Home Page URL and the Reply URL for authentication (using AAD as the authentication provider) are set correctly. My assumption is that it is this gibberish URL that causes the UriFormatException, but I've got no clue where the URL is coming from...

That said, the application seems to start up properly (things like logging frameworks placing their files on startup also work), and starting it up does not place any errors into the diagnostic logs, but whenever a request is made the above error occurs.

Locally and in production (using a "classical" VM, not an Azure Web App) the web app runs without problems, but I was not able to spot any difference in configuration between those and the Azure Web App (except, of course, the different file paths and DB connections). Also, given that the stack trace names an Azure middleware, the problem seems to originate from the app being run as an Azure Web App, and as that is pretty much the only leverage I have there, it ought to be some configuration error...

As part of setting up the staging environment I aim to have as much as possible automated. Thus I have also set up CI/CD, i.e. a build and a release pipeline, using Azure DevOps. This seems to work fine. Configuration-wise I use XML transformations on the web.config resp. web.Staging.config file. The transformations are all applied correctly.

--

For any ideas on what may be wrong I would be immensely grateful, as I have pretty much run out of things to try (this is the third full day I have been trying to get this to work...)

~ Finrod

""",azure-web-app-service +47674415,Adapting hypermedia links based on environment

I have an on-premise ASP.NET Web API that queries on-premise data. The plan is for the client web application to call an Azure Function, which will deal with authentication (Azure AD B2C) and validate the request before forwarding it to the on-premise API itself (over a VPN).

The API generates hypermedia links that point to the API itself. This works nicely when querying the API directly, as each of the links helps in the discovery of the application.

This API is currently in use locally within the organisation, but we now need to expose it so it can be consumed over the web. We don't want to expose the API directly; we'd rather route it through a Function App that can authenticate, validate and perform any other logic we may need.

The question I have is: how would you translate these URLs to an endpoint in the Azure Function? I.e., I would really like the consuming web application to be able to use these hypermedia links directly and have the Azure Function route them to the correct API endpoint.

In an ideal world we'd have the links exposed on the client, which would map to the resource. As the API isn't exposed, how do we route it via the Function App?
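To make concrete what I mean by "translate", here is a rough sketch of the rewriting I picture the gateway doing on outgoing links (both hosts are made up for illustration; the real mapping would live in the Function App):

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical hosts: the on-premise API and the public Function App front door.
INTERNAL_BASE = "https://api.internal.example.com"
PUBLIC_BASE = "https://myfuncapp.azurewebsites.net/api"

def translate_link(href):
    """Rewrite an internal hypermedia link so it points at the public gateway."""
    internal = urlsplit(INTERNAL_BASE)
    public = urlsplit(PUBLIC_BASE)
    parts = urlsplit(href)
    if parts.netloc != internal.netloc:
        return href  # not one of our links; leave it untouched
    return urlunsplit((public.scheme, public.netloc,
                       public.path.rstrip("/") + parts.path,
                       parts.query, parts.fragment))

print(translate_link("https://api.internal.example.com/orders/42?page=2"))
# -> https://myfuncapp.azurewebsites.net/api/orders/42?page=2
```

The inverse mapping (public path back to internal endpoint) would run inside the Function when forwarding the request.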

,azure-functions +56441174,"PowerShell Remote from VSTS Pipeline

I'd like to invoke PowerShell commands on my VM remotely. I added the "Run PowerShell on Target Machines" task to my pipeline. I provided the IP, username and password of my remote VM. Here's the error that I'm getting:

Unable to create pssession. Error: 'Connecting to remote server failed with the following error message: WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that a firewall exception for the WinRM service is enabled and allows access from this computer. By default, the WinRM firewall exception for public profiles limits access to remote computers within the same local subnet. For more information, see the about_Remote_Troubleshooting Help topic.'

On my remote VM I did:

Enable-PSRemoting
Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress Any

These commands were mentioned here: https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_remote_troubleshooting?view=powershell-6

What else can I do?

""",azure-devops +54277263,azure App service deploy as code or container?

I have APIs written in ASP.NET Core that can easily be containerized. I want to deploy these APIs to Azure App Service, but I am not able to decide whether I should containerize them and deploy them as containers in a Web App, or deploy them directly as code. On what basis can this be decided? I see that App Service gives scale-out capability for both ways of deployment, and other factors like continuous deployment also look the same. So how should I decide which approach to take, or does it really not matter in this case?

,azure-web-app-service +37829602,"remote desktop to an azure VM (created by the new portal - portal.azure.com) over the port 443

I have a virtual machine created in the new Azure portal (portal.azure.com). I can connect to it using Remote Desktop on port 3389 without any problems.

I am asking for a guide to setting up my virtual machine so that it can also be reached via Remote Desktop over port 443 (since the work network only allows outgoing 443).

With the classic portal I just needed to add an "endpoint", and that worked.

However, with the new portal, in the "network security group" I tried to modify the "inbound security rules", changing the default value 3389 to 443, but I had no luck.

Edited: captured screenshots

[screenshot]

[screenshot]

,azure-virtual-machine +43051445,"A property has a space and Href is ignoring everything after it

I can download an uploaded file from Azure using a direct link; that seemed the quickest and easiest way to do it, though by no means the safest or the smartest. However, when I link to it in the href, everything after a space is ignored, meaning that if a user uploads a document with a space in the name, it doesn't resolve properly. How would I modify my code to percent-encode any spaces when building the link?

Here is my code. The link clicked to download the file:

  <a href=@ViewBag.LinkToDownload@item.DocumentId@item.RevisionId@item.Attachment>Download</a> 

The controller

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Download(string id)
{
    string path = @"https://filestorageideagen.blob.core.windows.net/documentuploader/";

    return View(path + id);
}
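For illustration, the escaping itself would look something like this (sketched in Python purely to show the encoding; the base URL and file name are invented). The space needs to become %20 before the value lands in the href:

```python
from urllib.parse import quote

def blob_url(base, *segments):
    """Join path segments onto a base URL, percent-encoding spaces and other unsafe characters."""
    return base.rstrip("/") + "/" + "/".join(quote(s) for s in segments)

print(blob_url("https://example.blob.core.windows.net/documentuploader",
               "my report 2017.docx"))
# -> https://example.blob.core.windows.net/documentuploader/my%20report%202017.docx
```

In C#/Razor the equivalent would presumably be applying Uri.EscapeDataString (or similar) to each name segment before concatenating.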
""",azure-storage +39492140,Build on Azure Web App Error: Cannot find runtime target for framework '.NETStandard,Version=v1.6'

I have an ASP.NET Core web project that was continuously deployed to Azure without problems. After I added another .NET Core class library to the solution, it gives the following build error on Azure: Microsoft.DotNet.Publishing.targets(149,5): error : Can not find runtime target for framework '.NETStandard,Version=v1.6' compatible with one of the target runtimes: 'win8-x86, win7-x86'.

However, the solution builds and runs successfully on my local box.

Has anyone experienced that before?

Thanks Geoff

,azure-web-app-service +50737410,Understanding Poison Message Handling in Azure Message Queue and using it in Logic Apps

I'm trying to make a workflow using Azure Logic Apps where several Azure Functions are connected. I'm using a blob trigger and I'm sending its content to the first function, then that function sends an HTTP request to the next one, and so on. However, I would like to make sure that the first function processes it correctly, so I figured I could use a message queue, as it supports poison message handling.

Now the blob trigger puts a new message in a queue, which is then processed by the first function. I've seen many articles about how I can set retry policies (how many times a message should be processed and the intervals between retries), but I can't quite find information about how to use the poison message handling. So my questions are:

How are those poison messages handled after exceeding the retry count?

Do they just stay in that queue, marked as poison?

Are they put in some other queue that contains only the poison ones?

How can I even find them? Is it only possible to manage them by hand, or can I set up some kind of trigger that fires when a poison message occurs?

I am also wondering whether my approach is correct. Is it fine to connect Azure Functions directly to each other in Logic Apps, or should each have its own message queue? Do I even need the message queue to handle poison messages, or is there a good way to do it in Logic Apps directly? (I know it's possible to set retry settings, but I haven't seen anything about automatic poison message handling.)
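From what I've gathered so far (corrections welcome): the queue trigger is supposed to move a message to a sibling queue named after the original queue with a -poison suffix once the dequeue count is exceeded, and that poison queue is an ordinary queue another queue-triggered function can watch. As a trivial sketch of that naming convention (the default max dequeue count is, I believe, 5):

```python
MAX_DEQUEUE_COUNT = 5  # queue-trigger default before a message is declared poison

def poison_queue_name(queue_name):
    """Name of the sibling queue where the Functions runtime parks poison messages."""
    return queue_name + "-poison"

print(poison_queue_name("incoming-blobs"))  # -> incoming-blobs-poison
```

If that's right, "finding" poison messages would just mean pointing a second queue trigger (or an alert) at the -poison queue.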

,azure-functions +33841565,"Visual Studio Online Build - Visual Studio SDK and Modellng SDK

I have included the Visual Studio SDK and Modeling SDK in my project in order to build my T4 templates. It had been working fine until I set up the VSO build, which gives me the following error:

The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\TextTemplating\Microsoft.TextTemplating.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

However, I was unable to find any options or settings to import/include the mentioned SDKs on the build machine.

""",azure-devops +35836600,"Azure Storage Blob Put SSL handshake error

I have a correctly formed URL for the Blob PUT operation using Shared Access Signature:

http://xyz.blob.core.windows.net:80/container/BLOB_NAME?sv=2015-04-05&sr=b&sig=xtpL3M2WRWILarpojLnjlacpIWs41%2BosFWiTtAPGwIE%3D&se=2016-03-07T06%3A00%3A59Z&sp=w

Using Fiddler's Composer I am able to successfully upload data (with an "x-ms-blob-type: BlockBlob" header).

However, when I change the URL to "https", the PUT fails with status code 502 and the following message:

[Fiddler] The connection to 'xyz.blob.core.windows.net' failed.
System.Security.SecurityException Failed to negotiate HTTPS connection with server.fiddler.network.https> HTTPS handshake to xyz.blob.core.windows.net (for #21) failed. System.IO.IOException The handshake failed due to an unexpected packet format.

It surely seems like a problem on Azure's end. How could I get this resolved?

P.S. In Chrome this problem manifests as "net::ERR_SSL_PROTOCOL_ERROR". In Edge I get "XMLHttpRequest: Network Error 0x80070005 Access is denied."

""",azure-storage +51315104,Is it possible to upgrade an existing IaaS VM OS image from 2012 to 2016

I have several IaaS VMs deployed with Windows Server 2012 Datacenter. Is it possible to upgrade to 2016 without re-creating the VMs?

,azure-virtual-machine +55163281,SQL - high CPU consumption

(AZURE VM)

We have 1 physical CPU, 4 logical processors, 1 NUMA node, MAXDOP 4, CTOP 5, and 56 GB RAM on the database server. We have 50 databases for our clients, and at any particular time no more than 30 users are online. CDC is enabled for all databases (40 tables under CDC).

We have a normal CPU usage of 40-50%, and when clients are online usage jumps to 75-80%. We have observed spikes up to 100% CPU usage. The daily average CPU consumption is 60%, with a maximum of 96-100%.

All queries are executing in less than a minute.

Is there anything we can do to lower the CPU consumption without impacting performance, or is this normal?

,azure-virtual-machine +35954733,distribute third party application on azure virtual machine

I would like to know if there is a process that deploys/distributes third-party applications on Azure virtual machines. I have Windows applications that will be deployed on newly provisioned virtual machines. Is there an automated process for this? Currently we have our custom template, and we want to use images from the marketplace and deploy these applications automatically.

thanks!
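One common pattern for this is the Custom Script Extension, which runs an install script on the VM as part of the ARM deployment, and it works against marketplace images too. A sketch of the extension resource (the script URL, script name, and vmName parameter are placeholders, not values from this question):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/installApps')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.8",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "https://example.com/scripts/install-apps.ps1" ],
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install-apps.ps1"
    }
  }
}
```

The template can reference a marketplace image for the VM itself and layer the application installs on top via this extension.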

,azure-virtual-machine +49210524,"Azure Functions Custom ILogger

Hopefully some people with a much better understanding of Azure Functions than I have can help.

Out of the box you can log to Application Insights by using the APPINSIGHTS_INSTRUMENTATIONKEY setting in your app settings. This will log the function requests at a basic level and then allow you to call log.LogInformation etc.

This is simply covered by either a TraceWriter or an ILogger.

The issue is that I don't want to store the key in my config; I want to store it in Key Vault along with all the other keys for the app. I also want to do some other custom logging, so as per this link: https://docs.microsoft.com/en-us/azure/azure-functions/functions-monitoring#custom-telemetry-in-c-functions we can implement a custom TelemetryClient() object that reads from Key Vault without much hassle.

However, all the nice free logging you get by using ILogger is now gone, so I guess what I need to do is somehow inject an Application Insights ILogger into my function.

Can someone help me understand the limitations here, and how it would be done if it were possible? I also assume there must be an open GitHub issue for it, which I would be keen on finding and lending weight to, as I can't imagine I'm the only one that has faced this.

""",azure-functions +54149025,"Hide repositories from menu on a project on Azure DevOps

I want to hide two repositories (App 1 and 2 below) from the menu on a project on Azure DevOps.

Select a project -> Repos -> below

[screenshot: repository list]

The reason is that App 1 and 2 have code but are not being used at the moment; we will show them again in the future. Thus we want the ability to show/hide them.

Any idea?

""",azure-devops +11662161,"Log info for unhandled exception during Azure initialization?

I have deployed a site to Azure using VS 2010's Publish function. After VS says the deployment succeeded, I go to the old Azure dashboard, and the status cycles through a few different states (Initializing, Recovering, Recycling, etc.), but all of them have "Unhandled Exception" at the end. I've seen a few other posts about this type of error, but unfortunately they haven't resolved the issue for me, so now I just want to see the exception.

How do I see what the exception was?

""",azure-storage +56622380,"How to recreate an Agent Pool with the old name in AzureDevOps?

I am having trouble creating a new agent pool in Azure DevOps.

What I wanted to do was remove an old self-hosted host and deploy a new one. However, the agent pool used by the old host, and to be used by the new one, was created by a co-worker. This led to a situation where I was unable to remove the existing registered agents, which caused conflicts during deployment of the new host. To resolve this issue I removed the agent pool.

Now, when I want to create a new pool with the same name, I get the error message

"No agent pool found with identifier 76".

Has anybody ever seen this error message and/or got an idea what I can do about it?

Expected: A new agent pool with the same name as the old pool is created.

Actual: I receive the error message "No agent pool found with identifier 76".

Agent creation Image

Error Message Image

""",azure-devops +57590336,How to copy azure pipeline artifacts to a docker image which is microsoft dotnetcore runtime image

I am building an Azure DevOps pipeline for a .NET Core project. After the dotnet publish command executes, the artifacts are saved in $(artifactStagingDirectory). I want to build a Docker image. How do I copy these artifacts from the artifact staging directory into the Docker image? I am not building the project in the Dockerfile; I am executing dotnet publish in the pipeline.

I have tried to save artifacts to another folder by using:

dotnet publish --configuration $(BuildConfiguration) --output out
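Since the project is already published by the pipeline, the Dockerfile only needs to copy the publish output into a runtime image. A minimal sketch, assuming the publish output folder is out and the entry assembly is MyApp.dll (both names are placeholders for your actual ones):

```dockerfile
# Runtime-only base image; no SDK needed since the pipeline already ran dotnet publish
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app

# Copy the pipeline's publish output into the image
COPY out/ .

ENTRYPOINT ["dotnet", "MyApp.dll"]
```

A Docker task in the pipeline would then run docker build with the build context set to the folder that contains out/.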

,azure-devops +55294689,Delete an Azure Virtual Machine automatically after deployment

To deploy my infrastructure I need to deploy a VM with a custom script extension. The only purpose of the VM is to execute the script. After the execution of the script the VM should be deleted automatically.

How can this be done?

Additional information:

  • This is an Azure Resource Manager deployment
  • The deletion should work in the Azure Marketplace environment as well.
,azure-virtual-machine +44136929,"WEBSITE_HTTPLOGGING_CONTAINER_URL is a hidden application setting?

When swapping slots we get the following message:

\""enter

But the WEBSITE_HTTPLOGGING_CONTAINER_URL setting does not exist in the Application Settings of either web app. I understand that it comes from enabling Web App Diagnostics Logs but it is somehow hidden.

The issue is that this will cause an IIS restart in the production slot, and thus downtime until it has completed all initialisation tasks. There is no way to configure this setting as a "Slot Setting" (which would prevent an edit to the application settings).

The odd thing is that DIAGNOSTICS_AZUREBLOBCONTAINERSASURL is visible in application settings (it is also a Diagnostics Log configuration).

""",azure-web-app-service +54512492,"Azure App Service Kudu Continuous Deployment stopped working not even showing latest commits

Two weeks ago I created two App Services: one is a Function App and the other is a PHP website. After creating them, I went to Deployment Center and configured them to pull the code from an Azure Git repo and build & deploy it using Kudu. Everything was working as expected: whenever I made a code change and pushed it, Kudu took the latest commit, then built and deployed the latest version.

Yesterday it stopped getting the latest commits from the repo, and by the time I tried to re-configure the setup to see if that would help, it wouldn't even display the commits anymore.

Here are some screenshots showing the web app and its continuous deployment setup mapped to the Azure Git repo:

WebApp Continuous Deployment Setup

Azure Repo

Has anyone faced this issue before? What could potentially be causing this situation? Could a quota limit be causing it?

NOTE: I have a Pay-as-you-go subscription.

""",azure-web-app-service +50656132,"Azure SDK API and JAVA usage - Issue Listing all Containers and BLOBS inside a storage account

I am hitting issues while using the client.listContainers() and container.listBlobs() functions and getting the Java exception below. I tried to access containers and blobs individually with reference methods and that works fine. I'm not sure why this is happening, as there are no access restrictions (public access), and I get the client reference from the connection string without any issues.


Exception:

java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
    at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:113)
    at com.company.test.core.handler.AzureHeirarchyGenerator.getAzureFileList(AzureHeirarchyGenerator.java:69)
    at com.company.test.core.handler.AzureHeirarchyGenerator.main(AzureHeirarchyGenerator.java:18)
Caused by: com.microsoft.azure.storage.StorageException: An unknown failure occurred : Connection refused: connect
    at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:66)
    at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:209)
    at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:109)
    ... 2 more
Caused by: java.net.ConnectException: Connection refused: connect
    at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
    at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:673)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:347)
    at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:115)
    ... 3 more

Code Snippet JAVA:

    CloudStorageAccount account = CloudStorageAccount.parse(azureConnectionString);
    CloudBlobClient client = account.createCloudBlobClient();

    // List all containers of a storage account
    Iterable<CloudBlobContainer> containers = client.listContainers();
    for (CloudBlobContainer cloudBlobContainer : containers) {
        Iterable<ListBlobItem> blobs = cloudBlobContainer.listBlobs();
        System.out.println("Code to fetch blobs inside container");
    }
""",azure-storage +52890991,Azure App Service: Converting Staging Slot into Standalone App

I have an Azure App Service with a staging slot. I'm trying to find a way to convert my staging slot into a normal standalone app.

Is there any way to achieve this?

Why do I want to do this?

Because I want to scale down my app from S1->B1

Since B1 doesn't support deployment slots, I'm now completely blocked from scaling down my app without deleting and recreating it.

,azure-web-app-service +35872354,How to startup apps added in Azure VM

I have a simple console application that runs in an Azure VM. What I did was create a Local Group Policy for this app, but it runs as a background process.

Is it possible to run the app normally on Windows startup, not in the background?

,azure-virtual-machine +51499968,VSTS - External Git

I've tried to get VSTS to connect to our enterprise Git repository.

To do this I had to get our firewall opened up, and as a result we found that VSTS does not connect to our network using the VSTS domain, i.e. ########.visualstudio.com.

Instead it connects using the IP address of the build agent, which is in the range specified in the Azure public IP list.

Does anyone know if this is a bug on MS's part? We could open our firewall to all the Azure public IPs, but this is very fragile (they can and will change) and presents a significant security risk, as anyone using Azure could attempt to connect to our Git repository.

Interestingly, if you install a private build agent, VSTS connects to it using the VSTS domain; we have observed this.

,azure-devops +44195652,Disconnect with Azure ACS form Local Machine
  • I pulled my Azure ACS credentials using the command below, and I can communicate with the Kubernetes cluster on Azure from my local machine: az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
  • But now I want to disconnect so that my kubectl can connect to another cluster, whether local or any other (I am trying to connect to a local one).

  • But every time I run a kubectl command, it communicates with Azure ACS.

,azure-virtual-machine +44577592,Azure load balancer with single VM

I've got a couple of VMs under the same subnet in Azure, without any availability set attached to either VM. When I try to use a (public) load balancer, it shows two options for load-balancing VMs:

  1. Availability set
  2. Single VM

Is it necessary to use an availability set to implement a load balancer between VMs, and must they be under the same availability set?

Also, if the above is true, what is the purpose of the single VM option, where a VM can be added under the load balancer without any availability set?

,azure-virtual-machine +47652865,"azure webapps 404 error

I have a web application developed in Spring Boot (REST services) which is deployed on Azure Web Apps (Azure App Service). My plan is Standard: 1 Small.

The application had been running smoothly for 2 weeks. All of a sudden, today the application went down. The consumer application which calls these REST services started experiencing 404 errors (the origin server did not find a current representation for the target resource or is not willing to disclose that one exists).

When I check the logs I cannot find any root cause that would bring the whole application down. This is the second time it has happened, and this time too I am unable to find a root cause (memory usage/CPU usage seem fine). The "Always On" setting is turned on.

I have the following questions:

1) What may be the root cause, and is there a way to find it?

2) Is there a way (in Azure Web Apps) to know when the application goes down and to auto-scale? (I have already set auto-scale rules for CPU usage and memory usage, but this did not help.)

""",azure-web-app-service +27751414,"Blob.getCopyState() returning null

Is this function not implemented in the Java SDK? It appears to always return null. I am copying one page blob to another and want to track the status of the copy.

    CloudPageBlob srcBlob = container.getPageBlobReference("source.vhd");
    String newname = "dst.vhd";
    CloudPageBlob dstBlob = container.getPageBlobReference(newname);
    dstBlob.startCopyFromBlob(srcBlob);

    // Get the blob again for updated state
    dstBlob = container.getPageBlobReference(newname);
    CopyState state = dstBlob.getCopyState();

Is there any other way to get the status? I am using azure-storage-1.2.0.jar.

""",azure-storage +55811118,"Deeplink to create branch from external site

I come from a Jira/Bitbucket environment where I could create a branch from within Jira. Bitbucket has a 'deeplink' that opens a dialog to create a branch:

http://[bitbucket]/plugins/servlet/create-branch?issueKey=[BugID]&issueType=Bug&issueSummary=[This+is+the+summary]

This allows an external app to call this URL (from a template) and start the 'create new branch' dialog in the browser. In this dialog the user can select the source repo / branch if other than default.

How can this be done in Azure DevOps?

In Azure DevOps, wherever I click the 'create branch' button, it comes with a popup.

""",azure-devops +41403714,Azure cloud service network bandwidth

I have an Azure web role on 3 Standard_A3 instances. The web API has a heavy load and transfers lots of large objects. I am trying to find out at what RPS we might hit the network bandwidth limit. Is there a way to find this out?

,azure-virtual-machine +51032742,Azure Functions Docker Deployment Linux workers are not available in resource group

I am trying to deploy an nginx container from Azure Container Registry through a Function App.

I am getting the error:

Linux workers are not available in resource group

How do I enable Linux workers for a resource group?

Dockerfile used for deployment:

FROM nginx
COPY dist /usr/share/nginx/html
,azure-functions +35893929,"How can I deploy a html website using Azure Resource Manager

I've got a website (basic HTML) and I want to deploy it using Azure Resource Manager. It doesn't have a Visual Studio .sln file and I don't want to create one.

I've found this tutorial for an Angular website that does something along the lines of what I am trying to do: http://www.azurefromthetrenches.com/how-to-publish-an-angularjs-website-with-azure-resource-manager-templates/

The problem I want to solve is that I have the Microsoft Azure SDK for .NET (VS 2015) 2.8.2, which allows me to add resources to my resource group project. The tutorial writes everything itself rather than using Visual Studio to create the resources.

Does any one know how to do this?

I've got my application to build the website using a website.publishproj (found in the tutorial), so I have my zip file. What I am now lacking is how to upload the zip file to Azure using the already existing PowerShell that comes with the 2.8.2 SDK.

So far I've added the below code under the Import-Module statement:

    C:\"Program Files (x86)"\MSBuild\14.0\bin\msbuild.exe 'C:\Source\website.publishproj' /T:Package /P:PackageLocation=".\dist" /P:_PackageTempDir="packagetmp"
    $websitePackage = "C:\Source\dist\website.zip"
""",azure-web-app-service +16122160,"Uploading images to azure blob storage with windows phone 7

I have been trying for the longest while to find a way to upload images to Azure Blob Storage from Windows Phone. I tried using the Windows Azure storage client for Windows Phone, but that does not seem to work any more. I also tried about 2 other NuGet packages, and I tried a web service that would handle the upload, but that didn't work out well, as I wasn't completely sure how I should connect the web service to my phone app. I also tried uploading via Windows Azure Mobile Services. Could anyone give me some pointers on how I should go about getting this to work?

Problem with the Windows Azure storage client for Windows Phone:

Install-Package Phone.Storage -Version 1.0.1
Attempting to resolve dependency 'SilverlightActivator (≥ 1.0.3)'.
'Phone.Storage 1.0.1' already installed.
ProjectName already has a reference to 'Phone.Storage 1.0.1'.

But no reference is visible in the References folder.

Problem with Mobile Services — I tried this: http://www.windowsazure.com/en-us/develop/mobile/tutorials/upload-images-to-storage-dotnet/

but could not find a storage client library for Windows Phone 7.

""",azure-storage +52147350,asp.net core 2.0 console application filenotfoundexception system.runtime 4.2.0.0

I created a simple console application using ASP.NET Core 2.0. I published it from VS 2017 to a folder and copied the contents to a folder on an Azure VM. When I run the console application I get the following error:

Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'System.Runtime, Version=4.2.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.

Can anyone help me with this?

Thanks

,azure-virtual-machine +57195219,"Unable to select GitHub organization in Azure App Service Deployment Center

I'm trying to set up GitHub deployments for an ASP.NET Core web application in Azure App Service Deployment Center.

I'm stuck at the step where I'm supposed to select my GitHub organization - the dropdown is empty:

[screenshot: Deployment Center organization dropdown]

The help says to check Azure App Service permissions on GitHub. However, everything looks fine in my GitHub Settings > Applications > Authorized OAuth Apps > Azure App Service (these permissions were automatically set up when I selected GitHub as Source Control in the Deployment Center):

[screenshot: GitHub authorized OAuth apps]

I wasn't able to find any other relevant settings page, neither in the global GitHub settings nor on the specific repository.

""",azure-web-app-service +42757214,Random port number appended to URL when browsing Web App in Azure

Whenever I browse to an Azure Web App I've published, a random port number (e.g. 44893) is appended to the URL, resulting in a "Page cannot be displayed" error. The web app uses HTTPS.

What causes this?

,azure-web-app-service +43749592,"Azure Web App returning wrong SSL certificate

We have configured one IP-based SSL certificate for our app (e.g. mydomain.com) and a number of SNI certificates for customers to use at custom domains (e.g. www.theirdomain.com), plus a www certificate for our site (www.mydomain.com). Those domains have CNAME records pointing to oursite.azurewebsites.net. We have been configured this way for quite a while, with the exception of recently changing spoketraining.com from SNI to IP-based because something over the weekend made it stop working.

We are suddenly having an issue where users get the wrong SSL certificate when they make requests from one of the CNAME URLs. They go to https://www.theirdomain.com and get the certificate for https://www.ourdomain.com. In Chrome this gives an ERR_CERT_COMMON_NAME_INVALID. In Edge they get "There's a problem with this website's security certificate." Most users get it all the time, but it's not consistent. In our testing we see that it mostly fails the first few times you go to the site, but then it may load part of the page and reject the API calls, and then it may work completely. Going to an incognito window usually makes it start again. When it does work, the browser shows the right certificate and that everything is good.

The way we've configured this should work, right? Is there something more we should be doing to make this work?

""",azure-web-app-service +23696914,Getting upload progress using Azure Storage in Android

I'm uploading a file in my Android application. The code is pretty simple:

    private boolean UploadFile(String fileLocation) {
        try {
            if (TextUtils.isEmpty(fileLocation)) {
                return false;
            }

            File fSrc = new File(fileLocation);

            if (!fSrc.exists()) {
                return false;
            }

            boolean bReturn = AzureManager.init(this);
            if (!bReturn) {
                return false;
            }

            String blobName = fSrc.getName();

            InputStream in = new BufferedInputStream(new FileInputStream(fSrc));
            CloudBlobContainer container = AzureManager.getCloudBlobClient().getContainerReference(AzureManager.getContainerName());
            CloudBlockBlob blob = container.getBlockBlobReference(blobName);
            blob.upload(in, fSrc.length());

            in.close();
            return true;
        } catch (Exception e) {
            // handle exception
        }
        return false;
    }

When I download from Azure, CloudBlockBlob has a download listener:

blob.setDownloadListener(eventListener); 

But how can I keep track of the progress when uploading?

,azure-storage +46154758,"serverless with azure functions and webpack

I'm wondering if anyone is using the Serverless Framework with Azure Functions, and how you handle sharing code across functions and bundling?

I'm converting a hapi.js app to serverless + serverless-azure-functions, and I'm trying to bundle my code before deploying so I can use require for reusable modules.

I found serverless-webpack, and it creates bundles that probably work on AWS Lambda, but there is a problem on Azure because of the lack of function.json files (e.g. list-function.json), so the functions aren't visible at all inside the Azure portal, nor can I invoke them.

I also found an article about this problem, but it shows how to handle it with azure-functions-cli, which only supports Windows.

Best JH

""",azure-functions +39430279,"How can I do Routing in Azure Functions?

I know that I can use url parameters like this:

"myfunction?p=one&p2=two" 

and in code that becomes

request.query.p = "one"; 

but I would prefer to get it like this (express routing style):

"myfunction/:myvalue" 

and use this url:

/myfunction/nine 

which becomes this in code:

request.params.myvalue = "nine" 

but I can't seem to find how to configure a route path like that in Azure Functions. Any ideas or docs?
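For reference, HTTP-triggered functions do support route templates via the route property of the httpTrigger binding in function.json; the syntax uses {braces} rather than Express's :param style. A sketch (the function and parameter names here are illustrative):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "get" ],
      "route": "myfunction/{myvalue}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

In JavaScript the captured value is then available as context.bindingData.myvalue rather than request.params.myvalue.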

""",azure-functions +21356104,"TfvcTemplate.12.xaml missing in Source Control

I'm using the hosted TFS from Microsoft at VisualStudio.com via VS 2012.

When I create a new build definition I'm presented with the following templates to choose from:

\""enter

If I select TfvcTemplate.12.xaml,

my build comes up with the following warnings:

\""enter

So I'm thinking that I probably need to set some defaults in the template and everything should be fine. However, when I go to Build Process Templates in the root of the TFS project, I can't see the template. No amount of get-latest/shutdown/restart etc. will bring it up.

\""enter

""",azure-devops +48955049,"VS Cloud Load Test - Could not locate directory/file

I'm trying to load test our application at work, and I have created a web test (coded web test) that works perfectly locally.

It uses a helper class to create data that's required for the application, like name, email, etc. (which must be unique for each application).

Name is returned by a method that resides in the helper class as an object of the Name class, which is pretty basic and contains 2 props: First and Last.

public static Name GetRandomName()
{
    // if (!File.Exists(@"..\..\..\Apps-Load-Performance-Tests\Data Files\fNames_1.csv")) return new Name();

    var allLines = File.ReadAllLines(@"..\..\..\Apps-Load-Performance-Tests\Data Files\fNames_1.csv");
    var maxLength = allLines.Length;
    var random = new Random();
    return new Name
    {
        First = allLines[random.Next(maxLength)],
        Last = allLines[random.Next(maxLength)]
    };
}

The problem is, when I run a load test via Visual Studio cloud, it throws a FileNotFoundException (fNames_1.csv).

In my test settings I have 'Enable Deployment' checked and have added the .csv file and the directory that contains the .csv file... but that doesn't seem to solve the problem.

I also tried adding the [DeploymentItem()] attribute, but no go...

What am I doing wrong? Any help, or a pointer in the right direction, would be highly appreciated.

Thanks in advance!

""",azure-devops +50488735,Exception during publishing web app to Azure

We created a sample ASP.NET Core application. When we tried to publish the application to Azure we got the following exception:

Web deployment task failed. (The type initializer for 'Microsoft.Web.Deployment.DeploymentManager' threw an exception.)

We have tried some solutions posted in blogs, but none of them resolved the issue.

,azure-web-app-service +52668098,"Azure V1 function on CosmosDB change feed triggers all changes when published

I have a function app that connects to the Cosmos DB change feed, and it works well, but I have an issue: when I publish the app, it processes changes for all documents currently in the monitored collection, which seems wrong.

The function is initialised as follows

    [FunctionName("Function1")]
    public static async Task RunAsync([CosmosDBTrigger(
        databaseName: "XXX",
        collectionName: "YYY",
        ConnectionStringSetting = "CosmosDb",
        LeaseCollectionName = "leases",
        LeaseCollectionPrefix = "cloud")] IReadOnlyList<Document> documents,
        TraceWriter log)
    {
    }

The only change I made was to the LeaseCollectionPrefix. Could that cause the trigger to receive changes for all documents in the collection because it's seen as a new lease?

""",azure-functions +27185330,Upgrading from A-series to D-series Azure virtual machine

We have SQL Server set up on A-series virtual machines. We want to upgrade to D-series virtual machines. Is it as simple as just upgrading the VM in Azure and clicking save, or are there other things I need to watch out for? I have heard of people having issues upgrading because the size was not available in the cluster that their virtual machines sit in.

,azure-virtual-machine +15760152,"Windows Server 2008 R2 SP1 is not pinged

I created a VM on Windows Azure. My actions:

1) Set "File and Printer Sharing (Echo Request - ICMPv4-In)" to On

2) Turned the Windows Firewall off

It still does not respond to ping.

UPD: The network is working properly.

""",azure-virtual-machine +55826682,"App Settings not being observed by Core WebJob

I have a Core WebJob deployed into an Azure Web App. I'm using WebJobs version 3.0.6.

I've noticed that changes to Connection Strings and App Settings (added via the Azure web UI) are not being picked up immediately by the WebJob code.

This seems to correlate with the same Connection Strings and App Settings not being displayed on the app's KUDU env page straight away (although I acknowledge this may be a red herring and could be some KUDU caching thing which I'm unaware of).

I've deployed a few non-Core WebJobs in the past and have not come across this issue, so I wonder if it's Core related? Although I can't see how that might affect configs showing up in Kudu.

I was having this issue the other day (where the configs were not getting picked up by the WebJob or shown in Kudu) and was getting nowhere, so I left it. When I checked back the following day, the configs were correctly showing in Kudu and being picked up by the WebJob. So I'd like to know what happened in the meantime that means the configs are now being picked up as expected.

I've tried restarting the WebJob and restarting the app after making config changes, but neither seems to have an effect.

It's also worth noting that I'm not loading appSettings.json during program setup. That being said, the connection string being loaded was consistently the connection string from that file, i.e. my local machine's SQL Server/DB. My understanding was always that anything in the Azure web UI would override any equivalent settings from config files. This post from David Ebbo indicates that calling AddEnvironmentVariables() during setup will cause the Azure configs to be observed, but that doesn't seem to be the case here. Has this changed, or is it loading the configs from this file by convention because it can't see the stuff from Azure?

Here's my WebJob Program code:

    public static void Main(string[] args)
    {
        var host = new HostBuilder()
            .ConfigureHostConfiguration(config =>
            {
                config.AddEnvironmentVariables();
            })
            .ConfigureWebJobs(webJobConfiguration =>
            {
                webJobConfiguration.AddTimers();
                webJobConfiguration.AddAzureStorageCoreServices();
            })
            .ConfigureServices((context, services) =>
            {
                var connectionString = context.Configuration.GetConnectionString(\""MyConnectionStringKey\"");

                services.AddDbContext<DatabaseContext>(options =>
                    options
                        .UseLazyLoadingProxies()
                        .UseSqlServer(connectionString)
                );

                // Add other services
            })
            .Build();

        using (host)
        {
            host.Run();
        }
    }

So my questions are:

  • How quickly should configs added/updated via the Azure web UI be displayed in KUDU?
  • Is the fact they're not showing in KUDU related to my Core WebJob also not seeing the updated configs?
  • Is appSettings.json getting loaded even though I'm not calling .AddJsonFile(\""appSettings.json\"")?
  • What can I do to force the new configs added via Azure to be available to my WebJob immediately?
""",azure-web-app-service +53396173,Azure Service Bus in Azure Function

I'm using the Service Bus trigger in Azure Functions v2.0. In the previous version I used BrokeredMessage and there were no problems with this. But as I moved to v2.0 I need to use Message instead of BrokeredMessage. And once I called

await queueClient.CompleteAsync(message.SystemProperties.LockToken); 

i get an exception which says:

The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue, or was received by a different receiver instance.

I have configured my Queue Client as follows:

var queueClient = new QueueClient(serviceBusString, MessageQueueName);

Does anyone face this issue? Are there any workarounds ?
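One pattern worth ruling out (a sketch, not a confirmed diagnosis): in the v2 Service Bus trigger the runtime settles the message for you on success, and a lock token is only valid on the receiver that acquired it, so completing it through a separately constructed QueueClient can produce exactly this error. If manual settlement is needed, the extension's autoComplete switch in host.json is the usual knob (section names per the Service Bus extension; adjust to your setup):

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "autoComplete": false
      }
    }
  }
}
```

With autoComplete left at its default of true, CompleteAsync should not be called at all; with it set to false, completion should go through the receiver the trigger binding provides rather than a new QueueClient.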

,azure-functions +12827059,"Accessing Windows Azure Queues from client side javascript/jquery

For a UI feature I need to read from a Windows Azure queue and update the UI accordingly.

I see plenty of node.js examples but nothing using pure Javascript or Jquery. (azureQuery comes close but no queue functionality yet and it needs a Web API to talk to)

This is a hybrid web app using both asp.net and MVC 4. This particular page is generated using MVC 4.

Any suggestions would be appreciated.

Roberto (PS. being able to write to the queue would also be nice)

""",azure-storage +42201570,how to set retention period for azure storage tables?

I have an Azure table used for metric data collection and I want to set a retention period for it, e.g. if the retention period is 7 then the table should only hold the last 7 days of data.

Is there any option available?
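For context, Azure Table storage has no built-in retention/TTL setting, so a common workaround is a scheduled job that deletes entities older than the window. A minimal sketch, assuming the `azure-data-tables` Python package and a `metrics` table (both are placeholder names); only the filter-building helper is pure:

```python
from datetime import datetime, timedelta, timezone

def retention_filter(days, now=None):
    """Build an OData filter matching entities older than the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    # Table storage accepts ISO 8601 datetime'' literals in $filter
    return "Timestamp lt datetime'%s'" % cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

# Hypothetical cleanup pass (requires a real connection string):
# from azure.data.tables import TableClient
# table = TableClient.from_connection_string(conn_str, table_name="metrics")
# for entity in table.query_entities(retention_filter(7)):
#     table.delete_entity(entity["PartitionKey"], entity["RowKey"])
```

Running this from a daily timer job approximates a 7-day retention policy; it does not help with storage billing mid-window, only with trimming old rows.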

,azure-storage +56273370,JavaScript HTTP Trigger Azure function

I am trying to create a new Azure Function (HTTP Trigger). There used to be an option to choose the language, but now it seems to default to C# and I'd like to create a JavaScript function.

I tried deleting the C# files and replacing them with JavaScript but that then gives me a 404 when I try to run. Any idea how I can create a JavaScript function?

,azure-functions +8541242,"Azure Website with User Editable Content

I have an ASP.Net MVC website running on a shared IIS7 host that allows users to create their own landing page. The website allows users to edit the content style (edit css via a UI) as well as uploading images.

I am considering migrating to Windows Azure to improve scalability and improve database backups (using SQL Azure Data Sync, see http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-backup-and-restore-strategy.aspx; I am limited in the SQL backup plans offered with my host).

One stumbling block: since the clients can upload images and edit css files, these files will need to be stored in blob storage or the database (any other options?). I don't want to use the database, because database storage is more expensive.

However, if these files are stored in blob storage, how will this impact the performance of the website, given the files (css, images) are fetched from blob storage instead of being read from the same disk as the website? I know browser caching will reduce the requests for these files, but what about first-time requests?

""",azure-storage +52760789,Azure App Service: Method from assembly does not have an implementation at

We have a WebAPI which runs locally under IIS but which errors out when deployed to an Azure App Service. This API used to work until we upgraded from StackExchange.Redis 1.2 to 2+. I copied the code from the App Service and ran it under local IIS which worked as expected. Any ideas what might be causing this?

Method 'ExecuteAuthorizationFilterAsync' in type 'ProjectAlpha.Api.Filters.TokenAuthorizationFilterAttribute' from assembly 'ProjectAlpha.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' does not have an implementation.
at System.Web.HttpApplicationFactory.EnsureAppStartCalledForIntegratedMode(HttpContext context, HttpApplication app)
at System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers)
at System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context)
at System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context)
at System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext)

Method 'ExecuteAuthorizationFilterAsync' in type 'ProjectAlpha.Api.Filters.TokenAuthorizationFilterAttribute' from assembly 'ProjectAlpha.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' does not have an implementation.
at ProjectAlpha.Api.WebApiApplication.Application_Start()
,azure-web-app-service +54782922,How to configure mail on Azure SQL Database

We are moving from Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64) to Microsoft SQL Azure (RTM) - 12.0.2000.8

Previously we sent mail via Database Mail if a particular table had not been updated in a particular time interval.

But when I tried to do the same on Azure, it seems this functionality is not available.

,azure-storage +54135528,Azure storage queue max concurrent clients

I have an Azure Function with a Service Bus trigger. I only want x number of function instances to run concurrently. This is done with maxConcurrentCalls=x in the host file. Can this also be achieved with Azure Storage Queues?
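For what it's worth, the storage queue trigger has an analogous knob in host.json: per instance, the runtime fetches batchSize messages at a time and starts fetching the next batch once newBatchThreshold is reached, so per-instance concurrency is roughly batchSize + newBatchThreshold. A hedged v2 host.json sketch (values are illustrative; this caps one instance, not the scaled-out total):

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
```

With batchSize 1 and newBatchThreshold 0, each instance processes one queue message at a time.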

,azure-functions +47431410,"Sub classing MultipartStreamProvider fails in Visual Studio c#?

I am trying to subclass MultipartStreamProvider in Visual Studio

System.TypeLoadException: Method 'GetStream' in type 'Uploadfunction.InMemoryMultipartFormDataStreamProvider' from assembly 'Uploadfunction, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' does not have an implementation. in Microsoft.NET.SDK.Functions.Build.targets line 31

My line 31 is like this

    <BuildFunctions TargetPath=\""$(TargetPath)\"" OutputPath=\""$(TargetDir)\""/>

I have System.net Assemblies and have also downloaded Microsoft.net.sdk.functions (1.0.6).

My function class looks like this -

public static class UploadFunction
{
    [FunctionName(\""UploadFunction\"")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, \""get\"", \""post\"", Route = null)] HttpRequestMessage req,
        TraceWriter log)
    {
        // Check if the request contains multipart/form-data.
        if (!req.Content.IsMimeMultipartContent(\""form-data\""))
        {
            return req.CreateResponse(HttpStatusCode.UnsupportedMediaType);
        }

        var provider = await req.Content.ReadAsMultipartAsync(new InMemoryStream());

        return req.CreateResponse(HttpStatusCode.OK, \""File Uploaded\"");
    }
}

And my InMemoryStream class looks like this.

public class InMemoryStream : System.Net.Http.MultipartStreamProvider
{
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        MemoryStream s = new MemoryStream();
        return s;
    }
}
""",azure-functions +52746188,"Rewriting default azurewebsites to custom domains

I have a few web apps (REST services that use Swagger) hosted on Azure with their default domains as https://xyz-test.azurewebsites.net/swagger/ui/index. I need to rewrite them to a custom domain (already exists), example: newdomain.net. I tried adding the following to web.config:

<rewrite>
  <rules>
    <rule name=\""Redirect requests to default azure websites domain\"" stopProcessing=\""true\"">
      <match url=\""(.*)\""/>
      <conditions logicalGrouping=\""MatchAny\"">
        <add input=\""{HTTP_HOST}\"" pattern=\""^xyz\\.azurewebsites\\.net$\""/>
      </conditions>
      <action type=\""Rewrite\"" url=\""https://newdomain.net/{R:0}\"" appendQueryString=\""true\"" redirectType=\""Permanent\""/>
    </rule>
  </rules>
</rewrite>

This code redirects to newdomain.net however if I try to do newdomain.net/swagger it would throw the 404 error. I'm new to doing this.

What should be my correct approach so that instead of using https://xyz-test.azurewebsites.net/swagger/ui/index I should be able to use https://newdomain.net/xyz-test/swagger/ui/index to make API calls

(note: mydomain.com will be used by more than one service. example: https://newdomain.net/abc-test/swagger/ui/index)

""",azure-web-app-service +48443690,"How to resolve AbstractMethod Error in blob encryption?

I want to upload a blob into Azure blob storage, applying encryption to it, so I have tried to do it using the following code:

File f = new File(\""/home/prospera-user15/Desktop/test/download.jpeg\"");
CloudStorageAccount account = CloudStorageAccount.parse(storageConnectionString);
CloudBlobClient serviceClient = account.createCloudBlobClient();
// Container name must be lower case.
CloudBlobContainer container = serviceClient.getContainerReference(\""upload1\"");
container.createIfNotExists();
CloudBlockBlob blob = container.getBlockBlobReference(\""megha\"");
final KeyPairGenerator keyGen = KeyPairGenerator.getInstance(\""RSA\"");
keyGen.initialize(2048);
final KeyPair wrapKey = keyGen.generateKeyPair();

RsaKey key = new RsaKey(\""RSA\"", wrapKey);
System.out.println(\""Uploading the encrypted blob.\"");
BlobEncryptionPolicy policy = new BlobEncryptionPolicy(key, null);
BlobRequestOptions options = new BlobRequestOptions();
options.setEncryptionPolicy(policy);
AccessCondition accessCondition = null;
OperationContext opContext = null;
try {
    blob.upload(new FileInputStream(f), f.length(), accessCondition, options, opContext);
} catch (IOException e) {
    System.out.println(e.getMessage());
} catch (StorageException e) {
    System.out.println(e.getErrorCode());
}

\""for

""",azure-storage +56179837,"How to get / generate access token from azure OAuth 2.0 token (v2) endpoints?

I want to get an access token from given URL:

https://login.microsoftonline.com/{AzureTenantId}/oauth2/v2.0/token 

I am passing the following parameters as mentioned in the Microsoft docs: client_id, scope, client_secret, grant_type.

When I hit this URL I get a \""400 Bad Request\"" response.

When I try the same from Postman it succeeds and provides me an access token:

\""success

But not from my code:

public async Task<string> GetAuthorizationToken(string clientId, string ServicePrincipalPassword, string AzureTenantId)
{
    var result = \""\"";
    var requestURL = \""https://login.microsoftonline.com/{AzureTenantId}/oauth2/v2.0/token\"";
    var _httpClient = new HttpClient();

    var model = new {
        client_id = clientId,
        scope = \""{clentID}/.default\"",
        client_secret = ServicePrincipalPassword,
        grant_type = \""client_credentials\""
    };

    HttpContent httpContent = new StringContent(JsonConvert.SerializeObject(model), System.Text.Encoding.UTF8, \""application/x-www-form-urlencoded\"");

    var httpRequestMessage = new HttpRequestMessage(HttpMethod.Post, new Uri(requestURL))
    {
        Content = httpContent
    };

    using (var response = await _httpClient.SendAsync(httpRequestMessage))
    {
        if (response.IsSuccessStatusCode)
        {
            var responseStream = await response.Content.ReadAsStringAsync();
            return result;
        }
        else
        {
            return result;
        }
    }
}
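One thing that stands out in the snippet above: the body is JSON-serialized while the content type claims application/x-www-form-urlencoded, and the v2 token endpoint only parses form-encoded bodies, which commonly yields a 400. A minimal sketch of building the body correctly, shown here in Python (names are placeholders; only the encoder is exercised offline):

```python
from urllib.parse import urlencode

def client_credentials_body(client_id, client_secret, scope):
    """Encode the token request the way the endpoint expects: form-urlencoded."""
    return urlencode({
        "client_id": client_id,
        "scope": scope,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
    })

# Hypothetical call (requires network access and real credentials):
# import urllib.request
# url = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token".format(tenant="...")
# body = client_credentials_body("app-id", "secret", "app-id/.default").encode()
# token_json = urllib.request.urlopen(urllib.request.Request(url, data=body)).read()
```

The same fix in the C# above would be replacing the JSON StringContent with FormUrlEncodedContent over the same key/value pairs.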
""",azure-devops +49120735,Azure load balancer with NAT rule hiding port for RDP

I have internet facing Azure load balancer with public static IP (call it PIP) and I added a NAT rule - forward TCP port 12345 to local (subnet's IP) 10.2.2.2:3389 (VM that doesn't have public IP). And I'm trying to set NSG for subnet and VM's NIC.

subnet's NSG rules (all TCP):

  • 100: Source PIP:* => 10.2.2.2:3389 (from load balancer IP to VM's local IP)
  • 120: Source Internet:12345 => 10.1.2.4:3389

VM's NSG rules:

  • 100: PIP:* => 10.2.2.2:3389

and here's the problem: if I use Network Watcher's IP flow verify and set local IP to 10.2.2.2:3389 Remote IP:[PIP:12345] I get green light. Same with setting both ports (local and remote) to 3389. But when I'm trying to Remote Desktop to that VM from outside I get a connection error!

I have no idea why. The VM is up and running all good here.

,azure-virtual-machine +47217600,"String reference not set to an instance of a String. Parameter name: s

After upgrading from VS 2013 Update 4 to Update 5 I am using VSTS 2015. We get the following error in Visual Studio 2013: \""String reference not set to an instance of a String. Parameter name: s\""

It happens all the time when doing any of the following in Team Explorer:

  • Click \""Home\"", \""Refresh\"", then \""Builds\""
  • Click \""Home\"", \""Pending changes\""

giving this error message.

""",azure-devops +57490505,"Query Azure SQL Database from local Azure Function using Managed Identities

I want to query an Azure SQL Database from an Azure Function executing on my machine in debug using Managed Identities (i.e. the identity of my user connected to Visual Studio instead of providing UserId and Password in my connection string).

I followed this tutorial on Microsoft documentation so my Azure SQL Server has an AD user as admin which allowed me to give rights (db_datareader) to an Azure AD group I created with my Azure Function Identity and my user in it (and also my Function App deployed in Azure).

If I deploy and run in Azure my Azure Function it is able to query my database and everything is working fine. But when I run my Azure Function locally I have the following error :

Login failed for user 'NT AUTHORITY\\ANONYMOUS LOGON'.

The code of my function is the following:

    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, \""get\"", \""post\"", Route = \""test\"")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation(\""C# HTTP trigger function processed a request.\"");

        using (var connection = new SqlConnection(Environment.GetEnvironmentVariable(\""sqlConnectionString\"")))
        {
            connection.AccessToken = await (new AzureServiceTokenProvider()).GetAccessTokenAsync(\""https://database.windows.net\"");
            log.LogInformation($\""Access token : {connection.AccessToken}\"");
            try
            {
                await connection.OpenAsync();
                var rows = await connection.QueryAsync<Test>(\""select top 10 * from TestTable\"");
                return new OkObjectResult(rows);
            }
            catch (Exception e)
            {
                throw e;
            }
        }
    }

The code retrieves a token correctly; the error occurs on the line await connection.OpenAsync().

If I open the database in Azure Data Studio with the same user than the one connected to Visual Studio (which is member of the AD group with the rights on the database) I can connect and query the database without any issue.

Is it a known issue or am I missing something here ?

""",azure-functions +53412259,"Azure Function Docker not working with http trigger

Recently I have created a docker image with an Azure Function (Node) having an HttpTrigger. This is a basic HttpTrigger generated by default. I'm developing this on a Macbook Pro (Mojave) and I have the following tools installed:

  • NodeJs - node/10.13.0
  • .NET Core 2.1 for macOS
  • Azure Function core tools (via brew)

When I run the function locally with \""func host start\"" it all works fine and I can see the function loading messages. I was also able to execute the Azure Function via the trigger endpoint. However, when I try to build the Docker container and run it, I can load the home page of the app but cannot reach the function endpoint. In the log I could only see the following:

Hosting environment: Production
Content root path: /
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.

My Docker file is as below (generated by Azure core tools);

FROM mcr.microsoft.com/azure-functions/node:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY . /home/site/wwwroot

When I try to use 'microsoft/azure-functions-runtime:v2.0.0-beta1' as the base image, I can see the function loading and can also access the http trigger.

Is there anything missing or do I need to use a different image?

""",azure-functions +56394509,"Azure container fails to configure and then 'terminated'

I have a Docker container with an ASP.NET (.NET 4.7) web application. The Docker image works perfectly using our local docker deployment but will not start on Azure and I cannot find any information or diagnostics on why that might be.

From the log stream I get

31/05/2019 11:05:34.487 INFO - Site: ip-app-develop-1 - Creating container for image: 3tcsoftwaredockerdevelop.azurecr.io/irs-plus-app:latest-develop.
31/05/2019 11:05:34.516 INFO - Site: ip-app-develop-1 - Create container for image: 3tcsoftwaredockerdevelop.azurecr.io/irs-plus-app:latest-develop succeeded. Container Id 1ea16ee9f5f128f14246fefcd936705bb8a655dc6cdbce184fb11970ef7b1cc9
31/05/2019 11:05:40.151 INFO - Site: ip-app-develop-1 - Start container succeeded. Container: 1ea16ee9f5f128f14246fefcd936705bb8a655dc6cdbce184fb11970ef7b1cc9
31/05/2019 11:05:43.745 INFO - Site: ip-app-develop-1 - Application Logging (Filesystem): On
31/05/2019 11:05:44.919 INFO - Site: ip-app-develop-1 - Container ready
31/05/2019 11:05:44.919 INFO - Site: ip-app-develop-1 - Configuring container
31/05/2019 11:05:57.448 ERROR - Site: ip-app-develop-1 - Error configuring container
31/05/2019 11:06:02.455 INFO - Site: ip-app-develop-1 - Container has exited
31/05/2019 11:06:02.456 ERROR - Site: ip-app-develop-1 - Container customization failed
31/05/2019 11:06:02.470 INFO - Site: ip-app-develop-1 - Purging pending logs after stopping container
31/05/2019 11:06:02.456 INFO - Site: ip-app-develop-1 - Attempting to stop container: 1ea16ee9f5f128f14246fefcd936705bb8a655dc6cdbce184fb11970ef7b1cc9
31/05/2019 11:06:02.470 INFO - Site: ip-app-develop-1 - Container stopped successfully. Container Id: 1ea16ee9f5f128f14246fefcd936705bb8a655dc6cdbce184fb11970ef7b1cc9
31/05/2019 11:06:02.484 INFO - Site: ip-app-develop-1 - Purging after container failed to start

After several restart attempts (manual or as a result of re-configuration) I will simply get:

2019-05-31T10:33:46  The application was terminated.

The application then refuses to even attempt to start regardless of whether I use the az cli or the portal.

My current logging configuration is:

{
  \""applicationLogs\"": {
    \""azureBlobStorage\"": {
      \""level\"": \""Off\"",
      \""retentionInDays\"": null,
      \""sasUrl\"": null
    },
    \""azureTableStorage\"": {
      \""level\"": \""Off\"",
      \""sasUrl\"": null
    },
    \""fileSystem\"": {
      \""level\"": \""Verbose\""
    }
  },
  \""detailedErrorMessages\"": {
    \""enabled\"": true
  },
  \""failedRequestsTracing\"": {
    \""enabled\"": false
  },
  \""httpLogs\"": {
    \""azureBlobStorage\"": {
      \""enabled\"": false,
      \""retentionInDays\"": 2,
      \""sasUrl\"": null
    },
    \""fileSystem\"": {
      \""enabled\"": true,
      \""retentionInDays\"": 2,
      \""retentionInMb\"": 35
    }
  },
  \""id\"": \""/subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Web/sites/XXX/config/logs\"",
  \""kind\"": null,
  \""location\"": \""North Europe\"",
  \""name\"": \""logs\"",
  \""resourceGroup\"": \""XXX\"",
  \""type\"": \""Microsoft.Web/sites/config\""
}

Further info on the app:

  • deployed using a docker container
  • docker base image mcr.microsoft.com/dotnet/framework/aspnet:4.7.2
  • image entrypoint c:\\ServiceMonitor.exe w3svc
  • app developed in ASP.NET 4.7
  • using IIS as a web server

Questions:

How can I get some diagnostics on what is going on to enable me to determine why the app is not starting?

Why does the app refuse to even attempt to restart after a few failed attempts?

""",azure-web-app-service +55479062,"Why Azure app service restarts when swapping slots?

After having this issue in production for a long time, and having read everything I can find about it (such as this, this, or that), I made a simple test.

  1. Create an empty asp.net website
  2. in Application_Start send an email or message (i've used PushBullet) to you so you know when the app starts
  3. Create a new app service plan and resource group
  4. Create the website on Azure and publish it
  5. Create a staging deployment slot
  6. Swap staging/production
  7. Publish the website again so both slots have the same version of the website

So I have an empty website, no connection string, no slot setting.

When i click swap I will get notifications that slots restart (at least once each).

Why is this happening ?

UPDATE:

After studying Mohit's answer i need some more clarifications.

  • We send the notification in the Application_Start method which is triggered by the AppInit event if i understand correctly.

  • I do not understand the behavior you explain. The order seems very important to ensure no downtime yet you say it's not necessarily in that order. Why is it required to restart the app domain for the production slot ? Why would users get annoyed that the site is down (it should not) ?

  • What is the \""new swap\"" feature ? What's the difference with the \""old swap\"" ? For my tests i just swapped using the portal.

  • You mention the \""new swap\"" pauses before swap. I suppose it just means it waits for the applicationInitialization to complete (eg HTTP 200 on /) ?

  • I've done some more testing since yesterday. In the Application_Start method i've added some Thread.Sleep to make app startups longer. However when i swap i see no downtime on either staging or production. Shouldn't i experience downtimes on staging at least for the duration of my app startup ? Does this mean the slot that is warmed up then swapped with production is in fact another temporary slot that is neither staging nor prod ?

""",azure-web-app-service +44247261,"How do I apply grayscale to my image with SimpleFilters in C#?

I am creating a function in Azure that takes an image and resizes it + makes it grayscale. I'm currently using this function:

#r \""System.Drawing\""

using ImageResizer;
using ImageResizer.Plugins.SimpleFilters;
using System.Drawing;
using System.Drawing.Imaging;

public static void Run(Stream inputImage, string imageName, Stream resizedImage, TraceWriter log)
{
    log.Info($\""C# Blob trigger function Processed blob\\n Name:{imageName} \\n Size: {inputImage.Length} Bytes\"");

    var settings = new ImageResizer.ResizeSettings
    {
        MaxWidth = 400,
        Format = \""png\""
    };

    // Add the grayscale filter to the image
    inputImage.filters.Add(GrayscaleNTSC());

    ImageResizer.ImageBuilder.Current.Build(inputImage, resizedImage, settings);
}

I'm importing the Plugins.SimpleFilters but I don't know how to use it in C#. The project site provides examples in pure HTML.

Do you know how to grayscale the image?

I get the following error: The name 'GrayscaleNTSC' does not exist in the current context

The packages I'm using are:

\""dependencies\"": {   \""ImageResizer\"": \""4.0.5\""    \""ImageResizer.Plugins.SimpleFilters\"": \""4.0.5\"" } 
""",azure-functions +49455319,Duplicate existing running Virtual Machine - Python SDK

Does anyone have experience with the Python SDK (v: v2.0.0rc6) and cloning/duplicating running VM's into another resource group?

getting the OS disk to start. Will need the data disk as well

managed_disk = self.compute_client.disks.get(
    resource_group_name=source_rg_name,
    disk_name=vm.storage_profile.os_disk.name)

make a snapshot of the os disk.

self.compute_client.snapshots.create_or_update(
    self.config.resource_group_name,
    'SNAPSHOT-' + virtual_machine,
    {
        'location': managed_disk.location,
        'creation_data': {
            'create_option': 'Copy',
            'source_uri': managed_disk.id
        }
    }
)

create the VM. Throws exception below.

result = self.compute_client.virtual_machines.create_or_update(
    self.config.cybric_resource_group_name,
    virtual_machine,
    azure.mgmt.compute.models.VirtualMachine(
        location=vm.location,
        os_profile=vm.os_profile,
        hardware_profile=vm.hardware_profile,
        network_profile=azure.mgmt.compute.models.NetworkProfile(
            network_interfaces=[
                azure.mgmt.compute.models.NetworkInterfaceReference(
                    id=nic_obj['id'],
                    primary=True
                )
            ]
        ),
        storage_profile=azure.mgmt.compute.models.StorageProfile(
            os_disk=azure.mgmt.compute.models.OSDisk(
                caching=azure.mgmt.compute.models.CachingTypes.none,
                create_option=azure.mgmt.compute.models.DiskCreateOptionTypes.attach,
                name=dup_virtual_machine,
                managed_disk=azure.mgmt.compute.models.ManagedDiskParameters(
                    id=managed_disk.id
                )
            ),
            image_reference=azure.mgmt.compute.models.ImageReference(
                publisher=vm.storage_profile.image_reference.publisher,
                offer=vm.storage_profile.image_reference.offer,
                sku=vm.storage_profile.image_reference.sku,
                version=vm.storage_profile.image_reference.version
            )
        )
    )
)

Exception:

Failed to create virtual machines: Azure Error: InvalidParameter Message: Cannot attach an existing OS disk if the VM is created from a platform or user image. Target: osDisk

,azure-virtual-machine +44318005,CloudTableClient client side timeout

When using a CloudTableClient is there a way to specify a client side timeout?

The TableRequestOptions RetryPolicy and ServerTimeout control the number of retry attempts, the delay between attempts, and the storage service side timeout, but they don't seem to cover a client side per-attempt timeout (like the HttpClient.Timeout property).

My concern with relying on the ServerSideTimeout is with delays connecting to the actual server.

,azure-storage +49779302,"Azure Functions returning Error : Keyword not supported: 'metadata'

I have an azure application that consists of an API application, a website, and Functions (originally WebJobs).
All three use the same backend dlls to do all the heavy lifting and database work. The API, website, and Functions are just shells.

My azure functions were working correctly until I updated the backend dlls and republished. Since then I have been getting the \""Keyword not supported: 'metadata'.\"" error every time I try to insert records into the database. Since the functions are nothing but shells for functionality in the API application and the website, I ran the same functionality from them without issue. I even ran it from our integration test project, also without issue.

I know that Azure Functions can occasionally get \""Stuck\"" so I deleted and recreated it but still had the issue. I am using EntityFramework 6.2 My connection string includes the metadata information and I changed the string to properly include the \"" .

example of my connection string (all caps are masked values)

metadata=res://*/Model.MYMODEL.csdl|res://*/Model.MYMODEL.ssdl|res://*/Model.MYMODEL.msl;provider=System.Data.SqlClient;provider connection string=\""Server=tcp:MYDBAZURE.database.windows.net,1433;Database=MYDB;User ID=MYUSERNAME;Password=MYPASSWORD;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;\""

I know that the azure functions rely on some underlying dlls and components I cannot change, so I am thinking this might be the issue.

""",azure-functions +48950264,"Azure Created Image Without Generalizing

So I made the mistake of trying to capture an image of my VM without first running sysprep /generalize on it. Now I have a VM I can't start and an image I can't create a VM from.

Is there any way I can restore my original VM so I can create a valid image from it?

I saw this blog post https://blogs.technet.microsoft.com/shwetanayak/2017/03/19/captured-the-virtual-machine-didnt-intend-to-generalize-it-now-what/ that seems to imply that I can, but its solution says to create a copy of a VHD using a snapshot. When I try to create the snapshot, nothing shows up in the Source Disk managed disks drop down.

""",azure-virtual-machine +41243172,"CreateIfnotExists throws a connection error

CreateIfNotExists method throws an error:

No connection could be made because the target machine actively refused it 52.140.168.120:443

when an Application pool identity is used.

// Parse the connection string and return a reference to the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);

// Create the blob client.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

CloudBlobContainer container = blobClient.GetContainerReference(\""mycontainer\"");

if (container.CreateIfNotExists())
{
    BlobContainerPermissions containerPermissions = new BlobContainerPermissions() { PublicAccess = BlobContainerPublicAccessType.Blob };
    container.SetPermissions(containerPermissions);
    container.CreateIfNotExists();
}

But this works when using a custom user account and password. Is there a way to get this working using the Application Pool Identity?

""",azure-storage +46164977,"Azure Job BlobTrigger path to have year month day format in the path

I am writing a web job that processes data dumped to the azure storage account in a format
mystorage/data/{yyyy}/{MM}/{dd}/app.csv

What I want to do now in Blobtrigger is the following

  void Foo([BlobTrigger(\""mystorage/data/{yyyy}/{MM}/{dd}/app.csv\"")] Stream message, TextWriter log) { }

Is this possible? I want today's date to be parsed into yyyy, MM and dd. Basically the blob should be triggered based on today's date, which is part of the path of the file in the blob.
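As background, the blob trigger path supports {name}-style binding expressions but, to my knowledge, no date-token expansion, so a common workaround is binding on a wider pattern and filtering by date in code. A sketch of that filtering step, shown here in Python (the path layout mirrors the question; helper names are made up):

```python
import re
from datetime import datetime

# Hypothetical helper: the trigger binds on a wide pattern such as
# "data/{year}/{month}/{day}/app.csv" and the date is checked in code.
PATH_RE = re.compile(r"data/(\d{4})/(\d{2})/(\d{2})/app\.csv$")

def blob_date(blob_path):
    """Extract the date encoded in a blob path, or None if it doesn't match."""
    m = PATH_RE.search(blob_path)
    if not m:
        return None
    return datetime(int(m.group(1)), int(m.group(2)), int(m.group(3)))

def is_todays_blob(blob_path, today=None):
    """True when the blob path encodes today's date."""
    today = today or datetime.utcnow()
    d = blob_date(blob_path)
    return d is not None and (d.year, d.month, d.day) == (today.year, today.month, today.day)
```

The function body then ignores blobs whose path date isn't today, which approximates the date-scoped trigger the question asks for.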

""",azure-functions +42127733,"What does the \Storage Account\"" setting of an Azure Function App do?

It has a default selection of \""functionb7be452dbab0\"" in my case but I can change it to select other storage accounts. There is no documentation that I can see which explains the \""storage account\"" setting.

""",azure-functions +53296667,"KeyVault firewall configuration and Azure Functions consumption plan

I have a KeyVault with some secrets in it. I have configured the firewall with a few limited client IPs and also made sure the \""Allow trusted Microsoft services to bypass this firewall\"" option is set to \""Yes\"".

However, when I try to connect and retrieve a secret from an Azure Function (using Managed Service Identity) I get a 403 Forbidden. If I turn the firewall off (i.e. "Allow access from all networks") then it works fine.

In the (i)nformation box it says that Azure App Services (Web Apps) are supported. I thought this would cover function apps too but obviously not.

I know that I can use a S1 plan and a VNET (and join KeyVault to the same VNET) but then we lose the flexibility of the consumption plan.

I have considered adding the entire Azure IP range for the data centre in question but I don't want the admin overhead.

Any other thoughts on how to secure a KeyVault using a firewall but still be able to access it from a function running on a consumption plan?

(screenshot of the "Supported" information box)

""",azure-functions +49224951,"Converting ODataQuery to TableQuery to query Azure Table Storage using OData

I'm using Azure table storage and I'd like to be able to query using OData. I've come across the Microsoft.Rest.Azure.OData.ODataQuery class but I can't find any examples of how this is consumed.

The CloudTable object permits queries through a TableQuery<T> object so is there any way to convert an ODataQuery to a TableQuery?

I know that earlier versions of Azure's table storage used an underlying OData API but I don't know if this is still the case and I haven't found any documentation detailing whether the table can be exposed through OData.

Can anyone explain how to query Azure table storage using OData - ideally through a library?

EDIT:
For clarity I know that table storage exposes a REST API which accepts OData queries; what I'm looking for is a way to pass the OData query programmatically: If I have an ODataQuery object how can I use this to query a CloudTable object?
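As a rough illustration of what a Table service OData filter looks like at the wire level (this is a hand-rolled sketch of standard OData `$filter` comparison syntax, not the ODataQuery or TableQuery APIs; the property names are illustrative):

```python
def odata_filter(partition_key: str, row_key_prefix: str) -> str:
    # Standard OData comparison syntax as accepted by the Table service REST API:
    # string literals are single-quoted, clauses are joined with 'and'.
    # The ge/lt pair implements a prefix range scan on RowKey.
    return (f"PartitionKey eq '{partition_key}' "
            f"and RowKey ge '{row_key_prefix}' "
            f"and RowKey lt '{row_key_prefix}~'")

print(odata_filter("orders", "2018"))
```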

""",azure-storage +41433211,"TFS 2017 HTTPS Binding loses console permissions?

I've been trying all day to set up my instance of TFS2017 to work with HTTPS.

I've read the official setup guide but it didn't help much.

My instance is attached to a domain and configuration has been made with an Administrators group user. The domain account is referenced as an administration console user properly. The setup has been made with default 8080 port and domain account user can access the website as expected (hosted at http://machine-name:8080/tfs)

Now, when I change the IIS website binding to use HTTPS on port 443 with a valid wildcard certificate, set the hostname to tfs.mydomain.com, and require SSL, my user can no longer authenticate. I make the TFS Public URL point to https://tfs.mydomain.com/tfs.

I get prompted for the authentication box but after many attempts the site would just fail with 401.

The tests are made into the server environment to avoid Firewall confusions.

My instance has two network cards on two separate networks. The first resolves to a public IP, the second to a private IP. I noticed the configuration works with the machine names while it fails with the DNS resolution on the public IP. Could this be the reason?

Thanks for your help

""",azure-devops +15787768,Accessing Azure Cloud Service PaaS Instances via PowerShell remoting

I have an Azure Web Role with 2 instances (NB: these are PaaS roles, *not* Azure Virtual Machines). I can connect to them via Remote Desktop, but I don't know how to do PowerShell Remoting, because unlike with an Azure Virtual Machine cloud service there is no way to define an endpoint and port for each instance, as there are no separate addresses for each worker role.

How can I connect to an individual PaaS Worker Role Instance via Powershell Remoting ? IOW how can I use:

Enter-PSSession -ComputerName PC1 -Credential User

against a Cloud Service Worker Role (PaaS) Instance?

,azure-virtual-machine +55627936,"Kusto language. Get one value only if the previous value in time is not the same

Context to my very vague title: I have 4 virtual machines that send their logs to application insights. I retrieve the logs and transform this in a table with kusto language.

Table of outcome (screenshot)

Query:

AzureActivity
| where ResourceProvider == "Microsoft.Compute" and ActivityStatus == "Succeeded" and OperationName == "Deallocate Virtual Machine"
| project DeallocateResource=Resource, DeallocatedDate=format_datetime(EventSubmissionTimestamp, 'yyyy-MM-dd'), DeallocatedTime=format_datetime(EventSubmissionTimestamp, 'HH:mm:ss')
| join kind=fullouter (
    AzureActivity
    | where ResourceProvider == "Microsoft.Compute" and ActivityStatus == "Succeeded" and OperationName == "Start Virtual Machine"
    | project StartupResource=Resource, StartDate=format_datetime(EventSubmissionTimestamp, 'yyyy-MM-dd'), StartTime=format_datetime(EventSubmissionTimestamp, 'HH:mm:ss')
) on $right.StartupResource == $left.DeallocateResource
| where StartDate == DeallocatedDate
| project Resource=coalesce(StartupResource, DeallocateResource),
          Date=format_datetime(todatetime(coalesce(StartDate, DeallocatedDate)), 'dd/MM/yyyy'),
          StartTime=StartTime,
          StopTime=DeallocatedTime,
          Runtime_Hours=format_datetime(datetime_add('minute', datetime_diff('minute', todatetime(strcat(StartDate, " ", DeallocatedTime)), todatetime(strcat(StartDate, " ", StartTime))), make_datetime(2017, 1, 1)), 'hh:mm')
| sort by Date asc, Resource asc

As you can see, the runtime is not correct: when a VM is started at 8:15 and stopped at 8:58 but shows a runtime of 12:43 hours, then something is wrong. In the activity log of the VM I see that a colleague did some strange things with the VM and started it a couple of times (a minute after starting it he started it again, probably a glitch from clicking the start button twice).

Activity logs (screenshot)

I did find a theoretical solution to my problem: my query needs to change so that the runtime (and even the start and stop times) only get logged when a VM start is followed by a stop. At the moment I take all the "Start Virtual Machine" and all the "Deallocate Virtual Machine" events and just order them in a table, which causes the mix-up in my result table.

But I can't seem to find the way to express this in my query: take a "Start Virtual Machine" event only when it is the first of the day or when the previous event is "Deallocate Virtual Machine" (the ordering is not strictly start-stop, so the time of day needs to be part of the formula), take a "Deallocate Virtual Machine" event only when the previous event is a start, and calculate the runtime per run rather than per day.
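The pairing logic the question describes (count a run only when a start is followed by a stop, and ignore repeated starts) can be sketched outside Kusto like this. This is an illustrative Python sketch over hypothetical event tuples, not the Kusto query itself; inside Kusto one would typically look at the previous row over a serialized, time-sorted log.

```python
from datetime import datetime

def runtimes(events):
    """events: list of (timestamp, kind) tuples sorted by time, kind in {'start', 'stop'}.
    Returns (start, stop) pairs: a stop is only paired with the earliest
    preceding unmatched start, and duplicate starts are ignored."""
    runs, pending_start = [], None
    for ts, kind in events:
        if kind == 'start':
            if pending_start is None:   # ignore a start that follows another start
                pending_start = ts
        elif kind == 'stop' and pending_start is not None:
            runs.append((pending_start, ts))
            pending_start = None
    return runs

events = [
    (datetime(2019, 4, 10, 8, 15), 'start'),
    (datetime(2019, 4, 10, 8, 16), 'start'),   # double-click glitch
    (datetime(2019, 4, 10, 8, 58), 'stop'),
]
for start, stop in runtimes(events):
    print(stop - start)   # 0:43:00, not 12:43 hours
```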

As I am very new to SQL and Kusto, I am not here for someone to hand me the solution or do the work for me; I was just hoping someone could help me out or guide me in the right direction to find a solution to my problem.

Thanks in advance !!!

""",azure-virtual-machine +30921734,"Installing Azure powershell in an azure Virtual Machine

I need to write a powershell workflow that creates an Azure Virtual Machine and executes some azure cmdlets in that Azure Virtual Machine. But the newly created VM has no azure powershell module installed in it. My code would be like this

New-AzureQuickVM -Windows -ServiceName $serviceName -Name $vmname -ImageName $VMImage -Password $password -AdminUserName $username -InstanceSize "ExtraSmall" -WaitForBoot

$WinRmUri = Get-AzureWinRMUri -ServiceName $serviceName -Name $vmname
$Cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password

Invoke-Command -ConnectionUri $WinRmUri -Credential $Cred -ScriptBlock {
    Add-AzureAccount ......  ## These cmdlets need the Azure PowerShell module
    Set-AzureSubscription........
    New-AzureStorageAccount......
}

I am not supposed to manually RDP into that VM to install the Azure PowerShell module; instead I must dynamically create the VM using PowerShell cmdlets and install the Azure module in that VM using PowerShell itself.

""",azure-virtual-machine +44975744,"Publishing Powershell Modules to VSTS Package Management using Publish-Module

I am trying to publish my Powershell modules to a VSTS Package Management feed. So far I have:

$securePass = ConvertTo-SecureString -String $RepositoryPassword -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($RepositoryUsername, $securePass)

Write-Debug "Adding the Repository $RepositoryName"
Register-PSRepository -Name $RepositoryName -SourceLocation $RepositorySourceUri `
                      -PublishLocation $RepositoryPublishUri -Credential $cred `
                      -PackageManagementProvider Nuget -InstallationPolicy Trusted

$PublishParams = @{
    Path = $ModuleFolderPath
    ProjectUri = $ProjectUri
    Tags = $ModuleTags
    Repository = $RepositoryName
    NugetApiKey = $NugetApiKey
}

Publish-Module @PublishParams -Force -Verbose

However I get the following error:

Publish-PSArtifactUtility : Failed to publish module 'Framework.Logging': 'Publishing to a ******** package management feed 'https://xxx.pkgs.visualstudio.com/_packaging/PowershellModules/nuget/v2' requires it to be registered as a NuGet package source. Retry after adding this source
'https://xxx.pkgs.visualstudio.com/_packaging/PowershellModules/nuget/v2' as NuGet package source by following the instructions specified at 'https://go.microsoft.com/fwlink/?LinkID=698608''. At C:\\Program Files\\WindowsPowerShell\\Modules\\PowerShellGet\\1.1.2.0\\PSModule.psm1:1227 char:17 + Publish-PSArtifactUtility -PSModuleInfo $moduleInfo ` + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidOperation: (:) [Write-Error] WriteErrorException + FullyQualifiedErrorId : FailedToPublishTheModule Publish-PSArtifactUtility

The PSRepository is passed https://xxx.pkgs.visualstudio.com/_packaging/PowershellModules/nuget/v2 as both the source and publish Uris when it is created. Any pointers as to where I am going wrong?

""",azure-devops +51767654,"Azure Storage Account requests getting throttled despite not reaching limits

I'm running into the below exception intermittently when running the Set-AzureRmStorageAccount cmdlet:

Set-AzureRmStorageAccount : The request is being throttled.
At D:\StorageAccount.psm1:36 char:4
+             Set-AzureRmStorageAccount -ResourceGroupName $Resource.Re ...
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Set-AzureRmStorageAccount] CloudException
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Management.Storage.SetAzureStorageAccountCommand

The cmdlet that I'm running is

Set-AzureRmStorageAccount -ResourceGroupName "RG" -Name "SA" -EnableHttpsTrafficOnly $true

So in order to repro this issue I put the above cmdlet in a while loop and executed it.

However I immediately started seeing the throttling error again and when I checked the portal for total transactions it is not even close to the limits specified in the documentation:

The following limits apply when performing management operations using the Azure Resource Manager only.

    Resource                                         Default Limit
    Storage account management operations (read)     800 per 5 minutes
    Storage account management operations (write)    200 per hour
    Storage account management operations (list)     100 per 5 minutes

The screenshot from the azure portal shows only around 75 requests as shown below

(screenshot of total transactions in the Azure portal)

Can someone help me understand why I'm seeing this throttling error so soon and if there is a way to see the source of requests to the storage account?

Thanks!

""",azure-storage +32582624,Azure WebJob: monitor all containers in account

I am developing an azure webjob which is monitoring a blob storage account for new inserted blobs. My storage account consists of multiple containers all holding similar information. Currently I'm using separate BlobTriggers for every container to monitor the single containers.

Is there a way to monitor the whole account for new blobs instead of every single container? If not can I automatically iterate over the containers in a storage account and call the webjob with the container names as parameter?

,azure-storage +50796593,"How do I convert readable image stream into base64 without saving locally

I want to convert an image in azure to base64. How can I achieve this using azure-storage package?

this.blobService.getBlobProperties('container', path, (err, properties, status) => {
    if (err) {
        res.send(502, "Error fetching file: %s", err.message);
    } else if (!status.isSuccessful) {
        res.send(502, "The file %s does not exist", fileName);
    } else {
        res.header('Content-Type', properties['contentType']);
        this.blobService.createReadStream('container', path, (error, response) => {
        }).pipe(res);
    }
});

The response I get is like this; I want to convert this (octet/stream) response to base64.

(screenshot of the response)
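The underlying conversion the question asks for, buffering the readable stream's chunks and base64-encoding the whole payload, can be sketched in plain Python (an in-memory stream stands in for the azure-storage read stream here; the names are illustrative):

```python
import base64
import io

def stream_to_base64(stream, chunk_size=64 * 1024):
    # Accumulate the readable stream in memory, then encode once at the end;
    # encoding chunk-by-chunk would only be safe if every chunk's length
    # were a multiple of 3 bytes.
    buf = bytearray()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buf.extend(chunk)
    return base64.b64encode(bytes(buf)).decode('ascii')

print(stream_to_base64(io.BytesIO(b"hello blob")))  # aGVsbG8gYmxvYg==
```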

""",azure-storage +54511247,Transfer data from azure blob storage to hdfs file system

I have data in azure storage blob which is in parquet format. What I need to do is to transfer all those storage files to a hdfs. Is there any way I can do that?

couldn't find any helpful method to do it

Thanks.

,azure-storage +49267438,"How to accept connections from local network on Azure Functions (v2)?

I am trying to develop an @AzureFunctions using Xamarin Forms. But it is not accepting connections from the cell phone.

How to configure host port of Azure Functions on Visual Studio 2017 to enable connections from * other than localhost connections?

How to accept connections from local network on Azure Functions (v2) ?

How to configure Host : Port of Azure Functions on Visual Studio 2017 ?

I want it to accept connections like ASP.NET Core does (.UseUrls("http://+:7071")):

Now listening on: http://[::]:7071 

but it is only listening on

http://localhost:7071 

(screenshot of the Azure Functions console)

GitHub Issue: How to configure Azure Functions (v2) listening Host / Domain ? https://github.com/Azure/azure-functions-core-tools/issues/537

""",azure-functions +43798349,IP Forwarding With Azure - After Decommissioning VM

I have a Linux VM on Azure. This VM has an External IP let's call it VMIP.

I need to decommission the VM and move the traffic to a new External IP.

We have moved most of the traffic via a DNS CNAME chain but some clients have an A-Record direct to VMIP.

How can I:

(a) Save the VMIP as static

(b) Decommission the VM

(c) Set up forwarding of all Port 80 traffic on VMIP to an External IP

,azure-virtual-machine +50035322,"Building asp.net MVC application in VSTS

I have been using Visual Studio Online for my MVC application for a while now, but I have only been using it mainly as a way to manage my work: cloud storage and version control in case I need to roll back something that I made a mistake on.

It has gotten to the point in time where I need to start managing my releases properly rather than just managing it in a folder structure. (I know I am fairly unprofessional).

So I am trying to use CI in VSTS but all of my builds are failing. It seems that I am missing all of my NuGet packages. Here is the log from my NuGet restore

https://hastebin.com/ufibohoqir.tex

I have read up a bit on a nuget.config file which I don't have. I have tried to research into this but I am fairly lost. Do I need this file? I don't use any other packages except for nuget.

Any help would be appreciated. I use VS2015 and I can build using it. I have no idea why it can not find the nuget references.

Thanks!

EDIT

Here is the Log of the build that failed. https://file.io/cRydzZ

It was too big to put the whole thing on Hastebin, but here is a snippet of the log from when it started to break.

https://hastebin.com/ubofozirop.vbs

EDIT 2

After changing my Agent Queue to Hosted as was suggested the NuGet packages all seem to be restored successfully. The build is still failing though. Here is my .csproj file: https://hastebin.com/iravicayek.xml

One of the things that I have noticed is that the packages that are not found when building are the ones that look like this in the .csproj file:

<Reference Include="Antlr3.Runtime, Version=3.5.0.2, Culture=neutral, PublicKeyToken=eb42632606e9261f, processorArchitecture=MSIL">
  <HintPath>..\packages\Antlr.3.5.0.2\lib\Antlr3.Runtime.dll</HintPath>
  <Private>True</Private>
</Reference>

All of the references that don't have HintPath and Private elements as children seem to load. I tried removing the children from the failing Reference elements, but they still failed to build.

""",azure-devops +48158825,"Visual Studio 2017 converting identity error

I just upgraded my VS 2017 to version 15.5.2 but get the following error dialog every time when open a VSTS project:

Error converting value "Microsoft.IdentityModel.Claims.ClaimsIdentity;xxxxxxxxxx\xxx@microsoft.com" to type 'Microsoft.VisualStudio.Services.Identity.IdentityDescriptor'. Path 'authenticatedUser.descriptor', line 1, position 184.

What can I do to resolve this issue?

""",azure-devops +56306821,"Get the data of Build.Repository.LocalPath and used it in my DockerFile

I want to get the data from the variable Build.Repository.LocalPath and use it in my Dockerfile, but it shows me an error.

This is my dockerfile:

FROM microsoft/aspnet:latest
COPY "/${Build.Repository.LocalPath}/NH.Services.WebApi/bin/Release/Publish/" /inetpub/wwwroot

I get this error:

Step 2/9 : COPY "/${Build.Repository.LocalPath}/NH.Services.WebApi/bin/Release/Publish/" /inetpub/wwwroot
failed to process "\"/${Build.Repository.LocalPath}/NH.Services.WebApi/bin/Release/Publish/\"": missing ':' in substitution
##[error]C:\Program Files\Docker\docker.exe failed with return code: 1

I have tried a lot of ways of putting this line:

COPY "/${Build.Repository.LocalPath}/NH.Services.WebApi/bin/Release/Publish/" /inetpub/wwwroot
""",azure-devops +51122890,"How to manage Azure AD with service user and token instead of certificate?

How can I access and manage AAD groups via powershell when I only have an App-User and a token?

I don't like to manage the certificates especially since I want to use Azure functions to modify the AD.

Descriptions from here are great but they always use the cert to connect to the AD.

Is there an alternative that I could use to connect to the AAD and manage it automatically from PowerShell?

""",azure-functions +53761251,"Maximum Text Size For createBlockBlobFromText method Azure Storage NodeJS

createBlockBlobFromText(container: string, blob: string, text: string | Buffer, options: CreateBlobRequestOptions, callback: ErrorOrResult<BlobResult>)

it is a method for writing text as block blob.

I want to know what is the limit of the text size.

In the docs it is said that "You create or modify a block blob by writing a set of blocks and committing them by their block IDs. Each block can be a different size, up to a maximum of 100 MB, and a block blob can include up to 50,000 blocks. The maximum size of a block blob is therefore slightly more than 4.75 TB (100 MB x 50,000 blocks)."

Does this method save the text as a single block, meaning only 100 MB? Or can I use more than that for a single blob?
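The quoted maximum is simple arithmetic over the documented limits; a quick check of where the "slightly more than 4.75 TB" figure comes from (this only reproduces the quoted numbers, it says nothing about what a single createBlockBlobFromText call accepts):

```python
# Worked arithmetic for the documented block blob maxima quoted above.
MB = 1024 ** 2
TB = 1024 ** 4

max_block = 100 * MB       # maximum size of one block
max_blocks = 50_000        # maximum blocks per block blob
max_blob = max_block * max_blocks

# 100 MB x 50,000 blocks comes out just under 4.77 TB,
# i.e. the "slightly more than 4.75 TB" in the docs.
print(max_blob / TB)
```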

""",azure-storage +47046670,OMS extension or Windows Diagnostics extension

Can the Windows diagnostics (WAD) extension do the same as the OMS extension in terms of collecting performance counter information and event details? Is there a reason to use the OMS extension over WAD for event/performance information?

,azure-virtual-machine +39293067,"Newly created Azure Web App with Overview and Activity Log not found?

I created a brand new web app on Azure. However, on navigating to the web app's page, the blades (panels) showcasing the service's overview and activity log (titled "Overview" and "Activity Log" respectively) are unavailable.

A simple error page indicating it is not found is displayed instead. I have attempted to tinker with the two suggestions on the page regarding publish profiles (get and reset options) but to no avail.

I doubt there is an issue with my deployment method as I chose all the default values during setup. What could be the issue?

Here is an image of the same. Please note I am still unable to post images in the questions. Appreciate the help!

""",azure-web-app-service +44013645,"Creating a table in Azure Table fails without exception

I'm trying to create a table in Windows Azure Table service using the example https://github.com/Azure-Samples/storage-table-dotnet-getting-started

While running this example I have no problem: the table creates itself, and insert and delete work fine.

But the code I copied into my project doesn't seem to work properly. Table creation apparently fails but returns no exception whatsoever. It's like my code does not wait for the table creation to finish.

Can anyone understand why?

Thanks for your help !

Here is how I call the code :

public async void saveInAzureTable()
{
    AzureStorage azure = new AzureStorage();
    CloudTable table = await AzureStorage.CreateTableAsync("randomtable123");
}

And here's the AzureStorage class :

using System;
using System.Threading.Tasks;
using Microsoft.Azure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;
using Scraper.Models;

namespace Scraper
{
    public class AzureStorage
    {
        public async Task<TableResult> storeAdvertisementInAzure(CloudTable table, AdvertisementEntity entity)
        {
            TableOperation insertOrMergeOperation = TableOperation.InsertOrMerge(entity);
            TableResult result = await table.ExecuteAsync(insertOrMergeOperation);
            return result;
        }

        /// <summary>
        /// Validate the connection string information in app.config and throws an exception if it looks like
        /// the user hasn't updated this to valid values.
        /// </summary>
        /// <param name="storageConnectionString">Connection string for the storage service or the emulator</param>
        /// <returns>CloudStorageAccount object</returns>
        public static CloudStorageAccount CreateStorageAccountFromConnectionString(string storageConnectionString)
        {
            CloudStorageAccount storageAccount;
            try
            {
                storageAccount = CloudStorageAccount.Parse(storageConnectionString);
            }
            catch (FormatException)
            {
                Console.WriteLine("Invalid storage account information provided. Please confirm the AccountName and AccountKey are valid in the app.config file - then restart the application.");
                throw;
            }
            catch (ArgumentException)
            {
                Console.WriteLine("Invalid storage account information provided. Please confirm the AccountName and AccountKey are valid in the app.config file - then restart the sample.");
                Console.ReadLine();
                throw;
            }

            return storageAccount;
        }

        /// <summary>
        /// Create a table for the sample application to process messages in.
        /// </summary>
        /// <returns>A CloudTable object</returns>
        public static async Task<CloudTable> CreateTableAsync(string tableName)
        {
            // Retrieve storage account information from connection string.
            CloudStorageAccount storageAccount = CreateStorageAccountFromConnectionString(CloudConfigurationManager.GetSetting("StorageConnectionString"));

            // Create a table client for interacting with the table service
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

            Console.WriteLine("Create a Table to store data from the scraper");

            CloudTable table = tableClient.GetTableReference(tableName);
            try
            {
                if (await table.CreateIfNotExistsAsync())
                {
                    Console.WriteLine("Created Table named: {0}", tableName);
                }
                else
                {
                    Console.WriteLine("Table {0} already exists", tableName);
                }
            }
            catch (StorageException)
            {
                Console.WriteLine("If you are running with the default configuration please make sure you have started the storage emulator. Press the Windows key and type Azure Storage to select and run it from the list of applications - then restart the sample.");
                Console.ReadLine();
                throw;
            }

            Console.WriteLine();
            return table;
        }
    }
}
""",azure-storage +49374770,error while trying to push nuget package from VSTS

I added the dotnet task (.NET Core) to do a nuget push. In the NuGet server section it asked me to create a new NuGet connection. I went with the API Key option and gave in the connection name, feed URL and API key.

when I run this step I get the following error

Error: DotNetCore currently does not support using an encrypted Api Key.

is this a limitation or am i doing something wrong?

Please note that from my desktop I am able to create the package and push it using the API key.

,azure-devops +12309767,How to determine whether my Azure connection string is valid

A bad Azure connection string will hang my application indefinitely the first time I contact azure; in my case during blobContainer.CreateIfNotExist();.

Other SO posts about connectivity checking mentioned setting timeouts, but it still hangs indefinitely with a 2s timeout: blobContainer.CreateIfNotExist(new BlobRequestOptions() { Timeout = new TimeSpan(0, 0, 2) });

What's the correct way to check if an Azure connection string is valid?
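One offline sanity check, before any network call, is to parse the connection string's key=value pairs yourself and confirm the required parts are present. This is an illustrative Python sketch of the standard `AccountName=...;AccountKey=...` format, not the SDK's parser, and it only catches malformed strings, not wrong credentials:

```python
def parse_connection_string(cs: str) -> dict:
    # Azure storage connection strings are semicolon-separated key=value pairs,
    # e.g. "DefaultEndpointsProtocol=https;AccountName=foo;AccountKey=..."
    parts = {}
    for pair in cs.split(';'):
        if pair and '=' in pair:
            # partition on the FIRST '=' only: AccountKey values are base64
            # and may themselves end in '=' padding.
            key, _, value = pair.partition('=')
            parts[key] = value
    return parts

def looks_valid(cs: str) -> bool:
    parts = parse_connection_string(cs)
    return ('AccountName' in parts and 'AccountKey' in parts) \
        or 'UseDevelopmentStorage' in parts

print(looks_valid("DefaultEndpointsProtocol=https;AccountName=foo;AccountKey=abc123=="))  # True
```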

,azure-storage +42198043,When Upload/download files in window azure vm

I am using a dedicated VM for my web application and a load balancer to handle traffic. My query: when I upload updated files, will these files be accessible to all instances behind the load balancer at once? Secondly, if users want to download files from the application, will they download from the actual VM, or will any instance behind the load balancer serve the download, whichever instance the user hits?

,azure-virtual-machine +51469930,CI_VSTS publish different web Artifact for each project in single solution

I have a CI pipeline (VSTS) in which I build a whole solution, containing two website projects, into a single artifact. What I want to do is build the whole solution and then create a publish artifact for each project, e.g. one artifact for Project_website1 and one artifact for Project_website2. I tried a similar topic on StackOverflow but it didn't work for me. Thanks.

,azure-devops +49328159,"Azure App Service ApiApp

I am trying to create an API App in Azure App Service with PowerShell.

The cmdlet I am calling always create a Web App by default. If it is possible I would like to know how I can specify the type/kind to be Api App instead of Web App?

New-AzureRmWebApp -Name $name -Location $location -AppServicePlan $plan -ResourceGroupName $resourceGroup 

From my reading there is not much difference between the two except the icon; is it worth it to set the type to "Api App" if that's what my app is all about?

I am using version 5.4.0 of AzureRM PowerShell module.

> Get-Module "AzureRM"

ModuleType Version    Name
---------- -------    ----
Script     5.4.0      AzureRM
""",azure-web-app-service +41275438,"Deploying to Azure from git: 'No Deployable Projects'

According to the documentation it is possible to deploy to Azure by updating a git repository.

I have attempted the walkthrough here.

I created this github repository then generated an ASP.NET MVC project from the Visual Studio template.

Looking at the logs Azure detected the checkin but provided this unhelpful message:

Using the following command to generate deployment script: 'azure site deploymentscript -y --no-dot-deployment -r "D:\home\site\repository" -o "D:\home\site\deployments\tools" --basic'.

Generating deployment script for Web Site

Generated deployment script files

Found solution 'D:\home\site\repository\kudu-deployment-test.sln' with no deployable projects. Deploying files instead.

Why is my straight-out-of-the-box ASP.NET project not a 'deployable project'?

What can I do to fix it so it is?

""",azure-web-app-service +46703350,adding azure ad b2c authentication to azure function spa

I am trying to build a Single Page App with Azure Functions so that when user wants to visit my website they can visit the url of my azure function which will be a custom domain like www.contoso.com

But when they visit it first they must automatically go to login page for Azure AD B2C and after they login they get redirected to my SPA with their info.

I know how to create a SPA with Azure Functions without authentication, and I also know how to configure an Azure AD B2C tenant. I've also added Azure AD B2C authentication under Azure Function -> Authentication -> Azure AD -> Advanced.

My question is how I can initiate the login process for the user, just like in a normal website. In a normal ASP.NET website Visual Studio provides options to integrate this, but how can I do the same for Azure Functions?

,azure-functions +49785180,"Unable to load Entity Framework Core into C# azure function

I already opened an issue in Azure/azure-functions-host and I have a repo with the repro steps but I'm posting this here in case there is something inherently wrong with what I'm doing or someone has already stumbled open this issue.

My goal: I want to run in an azure function some code that lives in a class library in an existing visual studio solution.

This code happens to use entity framework core in order to read and write to a SQL Server database.

While trying to isolate the issues I was facing I ended up with the following scenario:

Repro steps:

  1. In Visual Studio: File > New > Project > Azure Functions

  2. Select Azure Functions v2 Preview (.NET Core)

  3. Select Http trigger

  4. Select Storage Emulator

  5. Select Access rights Function

  6. Install using nuget Microsoft.EntityFrameworkCore version 2.0.2

  7. Add an invocation to anything from the EFCore package

    In my case I added the following line:

    log.Info(typeof(DbContext).AssemblyQualifiedName); 
  8. Ensure the azure storage emulator is running

  9. Run the function from visual studio (F5)

  10. Hit the url printed in the console

Expected behavior: along with the default behavior of the example Http trigger function I expect to see the following line printed with each invocation:

Microsoft.EntityFrameworkCore.DbContext, Microsoft.EntityFrameworkCore, Version=2.0.2.0, Culture=neutral, PublicKeyToken=adb9793829ddae60

Actual behavior: the app throws an exception at runtime and outputs the following

[11-Apr-18 6:33:59 AM] Executing 'Function1' (Reason='This function was programmatically called via the host APIs.' Id=6faabfd8-eb96-4d71-906c-940028a7978a)
[11-Apr-18 6:33:59 AM] Executed 'Function1' (Failed Id=6faabfd8-eb96-4d71-906c-940028a7978a)
[11-Apr-18 6:33:59 AM] System.Private.CoreLib: Exception while executing function: Function1. FunctionApp1: Could not load file or assembly 'Microsoft.EntityFrameworkCore, Version=2.0.2.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. Could not find or load a specific file. (Exception from HRESULT: 0x80131621). System.Private.CoreLib: Could not load file or assembly 'Microsoft.EntityFrameworkCore, Version=2.0.2.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.

Current conjecture: while researching this issue I found something that might be related:

The Azure Functions runtime already has a set of packages available, one of those being Newtonsoft.Json in a specific version. If a newer version of Newtonsoft.Json is referenced from the project, a similar behavior is observed.

Here's a StackOverflow question. Here's a github issue

""",azure-functions +39523396,"Can an Azure Function be triggered by the creation of a DocumentDB document?

Say I have another Azure Function creating documents based on some other event (e.g. an API call).

Is there support (or will there be) for having an Azure Function run when a new document is created?

using System;

public static void Run(object doc, TraceWriter log)
{
    log.Info($\""doc based trigger? ... {doc}\"");
}

The binding I tried to use (I tried it with and without the \""id\"" property, and with both the documentDB and documentDBTrigger types):

\""bindings\"": [ {
  \""type\"": \""documentDB\"",
  \""name\"": \""doc\"",
  \""databaseName\"": \""MyDb\"",
  \""collectionName\"": \""MyCollection\"",
  \""connection\"": \""mydb_DOCUMENTDB\"",
  \""direction\"": \""in\""
} ]
""",azure-functions +31869830,Google Docs - with some different storage

Is it possible to use Google Docs with some storage other than Google Drive?

We have a .NET application that uses some document (docx) files stored on Azure. These files will be edited simultaneously by multiple people. We want to use Google Docs' collaboration feature. Using Google APIs we can load the files and can control document sharing to a restricted set of people. But we cannot upload the docx files to Google Drive. Is it possible to do so, and how?

OR

Is there any other online document collaboration tool that

  1. allows uploading the Azure documents, and
  2. provides APIs to integrate the tool with an existing .NET application?

Thanks.

,azure-storage +55012289,"Azure endpoint reached but calls to API returning 404 error

We have set up an app service project in the Azure Portal and then went through deployment of the project using Visual Studio DevOps. When I go to http://MyAzureSite.azurewebsites.net (Made up URL here) I can confirm that the service is up and running.

But when I add \""api/ControllerName/getStatus\"" I get a 404 error.

Call from my local machine is working perfectly fine.

http://localhost:52686/api/status/getStatus 

But not:

http://MyAzureSite.azurewebsites.net/api/status/getstatus 

Signature for the GetStatus looks good:

[HttpGet]
public List<Status> GetStatus()
""",azure-web-app-service +51136636,"Configuring Azure Application Gateway to Azure web app to route requests by path

I have two web apps (webapp1 and webapp2). I would like to use the Application Gateway feature that routes requests by path: http://mywebsite/login1 should route to webapp1, and http://mywebsite/login2 should route to webapp2.

Is it possible to do this with Application Gateway? If so, can you please give a link or directions on how to do this from a web apps perspective?

""",azure-web-app-service +56352264,"binding to input blob imperatively

How do we imperatively bind to an input blob?

I'd like to be able to read blobs inside my Azure Function. One way to do this is to add a parameter like this one:

[Blob(\""%MyInputBlob%/{FileName}\"", FileAccess.Read)] Stream input 

However this won't work for me because I will need to read multiple blobs and they have different {filenames}.

I understand there's an imperative binding solution to write to an output like so:

var attributes = new Attribute[]
{
    new BlobAttribute(path),
    new StorageAccountAttribute(connection)
};

using (var writer = await binder.BindAsync<TextWriter>(attributes))
{
    writer.Write(payload);
}

Is there a similar binding capability for INPUT blobs?
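For what it's worth, the same imperative Binder approach appears to work for reads as well. A minimal sketch, assuming the 'binder', 'path' and 'connection' names from the output example above (they are placeholders, not from any specific documentation of this exact scenario):

```csharp
// Sketch only: bind an INPUT blob imperatively with FileAccess.Read.
// 'binder' is the Binder/IBinder parameter injected into the function;
// 'path' and 'connection' are placeholders as in the output example.
var attributes = new Attribute[]
{
    new BlobAttribute(path, FileAccess.Read),
    new StorageAccountAttribute(connection)
};

using (var reader = await binder.BindAsync<TextReader>(attributes))
{
    string contents = await reader.ReadToEndAsync();
}
```

This can be repeated in a loop with a different path per file name.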

""",azure-functions +44910406,"How to read from Azure Table Storage with a HTTP Trigger PowerShell Azure Function?

The Row Key will be passed in the query string. What is needed in the function to create the \""connection string\"" to the Table Storage?

""",azure-functions +43788697,"Azure App Service Linux hosting WordPress error \""Installing WordPress ... This could be done in minutes. Please refresh your browser later.\""

I am hosting my blog on the Azure App Service platform on the new Linux host. It was working fine, but now the website cannot be accessed and it is giving the error \""Installing WordPress ... This could be done in minutes. Please refresh your browser later.\"" It stays like that forever.

I checked the health and everything is fine. I tried to enable PHP logging by editing wp-config.php and adding these two lines:

define('WP_DEBUG', true);
define('WP_DEBUG_LOG', true);

But I do not see the log file generated. Accessing phpMyAdmin gives the error

\""No route registered for '/phpmyadmin/'\"" 

This MSDN blog says that I might have to upgrade my Docker image. Since I already have some blog content created, I would like to export the DB before doing anything.

I have also added the .user.ini file with log_errors=on, but I do not see any errors logged.

edit:

Here's the error from the docker_XX_err.log file:

2017-04-21T03:06:06.663993010Z AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
2017-04-21T03:06:14.616897475Z ERROR 1102 (42000) at line 1: Incorrect database name ''
2017-04-21T04:06:56.519319746Z AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
2017-04-21T04:07:00.414460896Z ERROR 1102 (42000) at line 1: Incorrect database name ''

** edit 2**

Updating the Docker image to appsvc/apps:wordpress:0.1, shown below, did not fix the problem.

(screenshot of the Docker image setting)

The error in the docker_XX log says

Digest: sha256:ca50223ff969665a64ed3b690124f56d1cc51754331e94baa80327dcc474c020 Status: Image is up to date for appsvc/apps:wordpress
wordpress: Pulling from appsvc/apps
Digest: sha256:ca50223ff969665a64ed3b690124f56d1cc51754331e94baa80327dcc474c020 Status: Image is up to date for appsvc/apps:wordpress
wordpress: Pulling from appsvc/apps

After updating the image, I am still not able to access phpMyAdmin to export my data.

""",azure-web-app-service +47248572,"Any reason to use Azure Web Apps instead of Azure Web Apps for Containers?

I'm pretty new to Azure, so for the sake of learning I have deployed Node.js applications in Azure both as Docker containers and as Azure web apps on Linux. Since Azure web apps are containers anyway, is there any good reason why I should use them instead of my own containers, which I have better control over?

One problem I stumbled upon was that you have to take quite a few things into account with the preconfigured containers in Azure web apps, some of which is described here. If I instead use my own Docker containers, I don't have to take the extra steps that are sometimes required to get a Node.js application with its dependencies up and running as an Azure web app.

Am I missing something, or is it, as it now seems to me, less work to deploy my apps in Azure as Docker containers?

Sebastian

""",azure-web-app-service +53739799,"How do I find the network interface ID associated with an Azure VM

I can find the VM by using

$vm = Get-AzureRmVM -ResourceGroupName \""MyResource\"" -Name \""MyVM\"" 

But how can I find the network interface associated with this VM?

""",azure-virtual-machine +32867493,Blocking Outbound Internet connection on Azure VM but keeping Antimalware Extension working

I have a VM on which I blocked outbound connectivity to the Internet using a NetworkSecurityGroup.

I installed the Antimalware Extension and realized that the installation would not work without connecting to the Internet. So I removed the NetworkSecurityGroup to reactivate outbound connectivity and everything installed correctly.

Now I want to put the NetworkSecurityGroup restrictions back (no internet access).

What exclusions should I add to the NetworkSecurityGroup so that the Antimalware Extension continues working and updates normally?

,azure-virtual-machine +46368663,"VSTS Nuget Restore Fails with non compatible assembly error

I am trying to set up a VSTS build definition for a Service Fabric project, and I can't get the build past the 'Build' step.

Currently the project structure looks like this:

- Application
  - Service Fabric Project 1 (Web API)
  - Service Fabric Project 2 (Stateful Service)
  - Application Project Folder
  - Angular Project

I am just trying to build the Web API Service Fabric project. I have followed this guide and used the 'Azure Service Fabric Application' template; my build definition looks like this: (screenshot of the build definition)

And the error I get is:

C:\\Program Files\\dotnet\\sdk\\2.0.0\\Sdks\\Microsoft.NET.Sdk\\build\\Microsoft.PackageDependencyResolution.targets(323,5): Error : Assets file 'd:\\a\\3\\s\\ApplicationName.Security.Gateway\\obj\\project.assets.json' not found. Run a NuGet package restore to generate this file.
C:\\Program Files\\dotnet\\sdk\\2.0.0\\Sdks\\Microsoft.NET.Sdk\\build\\Microsoft.PackageDependencyResolution.targets(323,5): error : Assets file 'd:\\a\\3\\s\\ApplicationName.Security.Gateway\\obj\\project.assets.json' not found. Run a NuGet package restore to generate this file. [d:\\a\\3\\s\\ApplicationName.Security.Gateway\\ApplicationName.Security.Gateway.csproj]
Build continuing because \""ContinueOnError\"" on the task \""ReportAssetsLogMessages\"" is set to \""ErrorAndContinue\"".
C:\\Program Files\\dotnet\\sdk\\2.0.0\\Sdks\\Microsoft.NET.Sdk\\build\\Microsoft.PackageDependencyResolution.targets(165,5): Error : Assets file 'd:\\a\\3\\s\\ApplicationName.Security.Gateway\\obj\\project.assets.json' not found. Run a NuGet package restore to generate this file.

I downloaded the logs and also found this error during the Nuget Restore process:

2017-09-22T15:35:53.8340398Z d:\\a\\3\\s\\Application.Application\\Application.Application.sfproj(57 5): error : Unable to find the '..\\packages\\Microsoft.VisualStudio.Azure.Fabric.MSBuild.1.6.1\\build\\Microsoft.VisualStudio.Azure.Fabric.Application.props' file. Please restore the 'Microsoft.VisualStudio.Azure.Fabric.MSBuild' Nuget package 2017-09-22T15:35:53.8340398Z d:\\a\\3\\s\\Application.Application\\Application.Application.sfproj : warning NU1503: Skipping restore for project 'd:\\a\\3\\s\\Application.Application\\Application.Application.sfproj'. The project file may be invalid or missing targets required for restore. [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: The \""WriteRestoreGraphTask\"" task failed unexpectedly. [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: NuGet.Commands.RestoreCommandException: PackageTargetFallback and AssetTargetFallback cannot be used together. Remove PackageTargetFallback(deprecated) references from the project environment. 
[d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at NuGet.Commands.AssetTargetFallbackUtility.EnsureValidFallback(IEnumerable1 packageTargetFallback IEnumerable1 assetTargetFallback String filePath) [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at NuGet.Commands.MSBuildRestoreUtility.AddPackageTargetFallbacks(PackageSpec spec IEnumerable1 items) [d:\\a\\_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a\\_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at NuGet.Commands.MSBuildRestoreUtility.GetPackageSpec(IEnumerable1 items) [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at System.Linq.Enumerable.WhereSelectEnumerableIterator2.MoveNext() [d:\\a\\_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a\\_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at System.Linq.Enumerable.WhereEnumerableIterator1.MoveNext() [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at NuGet.Commands.MSBuildRestoreUtility.GetDependencySpec(IEnumerable`1 items) [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at NuGet.Build.Tasks.WriteRestoreGraphTask.Execute() [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at 
Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z d:\\a_temp\\NuGetScratch\\tspr1daf.vdl.nugetrestore.targets(131 5): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.d__26.MoveNext() [d:\\a_temp\\NuGetScratch\\temmko3j.dto.nugetinputs.targets] 2017-09-22T15:35:53.8340398Z 2017-09-22T15:35:53.8750823Z NuGet.CommandLine.ExitCodeException: Exception of type 'NuGet.CommandLine.ExitCodeException' was thrown. 2017-09-22T15:35:53.8750823Z at NuGet.CommandLine.MsBuildUtility.d__6.MoveNext() 2017-09-22T15:35:53.8750823Z --- End of stack trace from previous location where exception was thrown --- 2017-09-22T15:35:53.8750823Z at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2017-09-22T15:35:53.8750823Z at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2017-09-22T15:35:53.8750823Z at NuGet.CommandLine.RestoreCommand.d__48.MoveNext() 2017-09-22T15:35:53.8750823Z --- End of stack trace from previous location where exception was thrown --- 2017-09-22T15:35:53.8750823Z at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() 2017-09-22T15:35:53.8762943Z at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) 2017-09-22T15:35:53.8762943Z at NuGet.CommandLine.RestoreCommand.d__43.MoveNext() 2017-09-22T15:35:53.8770357Z WARNING: Error reading msbuild project information ensure that your input solution or project file is valid. NETCore and UAP projects will be skipped only packages.config files will be restored. 2017-09-22T15:35:54.0700174Z Restoring NuGet package Microsoft.ServiceFabric.5.7.198.

All the builds work without issue on our local machines, under both Release and Debug configurations.

Any help would be greatly appreciated.

Build logs can be downloaded here.

""",azure-devops +49280325,"How does traffic flow from Azure virtual machine to data dog

I have installed the Datadog agent on one of my virtual machines. Although I have altered my NSG so that all \""Outbound-connections\"" are denied, I am still able to see the \""CPU metric\"" getting updated on the Datadog dashboard. I would like to know how this information gets from Azure to Datadog.

""",azure-virtual-machine +42104647,"Creating a folder using Azure Storage Rest API without creating a default blob file

I want to create following folder structure on Azure:

mycontainer
  -images
    --2007
      ---img001.jpg
      ---img002.jpg

Now, one way is to use a Put Blob request and upload img001.jpg, specifying the whole path as

PUT \""mycontainer/images/2007/img001.jpg\""

But I want to first create the folders images and 2007 and then in a different request upload the blob img001.jpg.

Right now when I tried to doing this using PUT BLOB request:

StringToSign:

PUT

x-ms-blob-type:BlockBlob
x-ms-date:Tue, 07 Feb 2017 23:35:12 GMT
x-ms-version:2016-05-31
/account/mycontainer/images/

HTTP URL

sun.net.www.protocol.http.HttpURLConnection:http://account.blob.core.windows.net/mycontainer/images/ 

It is creating a folder, but it's not empty. By default it creates an empty blob file without a name.

Now, a lot of people say we can't create an empty folder. But then how come we can make one using the Azure portal? The browser must be sending some type of REST request to create the folder.

I think it has something to do with the Content-Type, i.e. x-ms-blob-content-type, which should be specified in order to tell Azure that it's a folder, not a blob. But I am confused.
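As background: Blob Storage has a flat namespace, and \""folders\"" are virtual - they exist only as name prefixes of actual blobs. A common workaround is a zero-length marker blob; a sketch using the .NET storage client for illustration (the \"".placeholder\"" name and the 'blobClient' variable are assumptions, not an Azure convention):

```csharp
// Sketch: upload a zero-length marker blob so the virtual \""folder\"" appears.
// \"".placeholder\"" is an arbitrary name; 'blobClient' is an existing CloudBlobClient.
CloudBlobContainer container = blobClient.GetContainerReference(\""mycontainer\"");
CloudBlockBlob marker = container.GetBlockBlobReference(\""images/2007/.placeholder\"");
marker.UploadText(string.Empty);
```

The same effect can be achieved over REST with a Put Blob of Content-Length 0 to the full prefixed name.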

""",azure-storage +52404343,Rename Virtual Machine Managed Disks on Microsoft Azure through PowerShell

How to Rename Virtual Machine Managed Disks on Microsoft Azure through PowerShell?

I tried with

Update-AzureRmDisk  

But I'm not able to change the name

How can I do that?

,azure-virtual-machine +30925194,"Azure blobs - cannot display image store in blob container in MVC 4

Question background:

I have a basic MVC 4 site and am trying to display a picture that is stored in a blob storage container in my Azure cloud account. I store a list of image URIs in a string List and pass these to a View, where they should be displayed.

The Issue:

I can't seem to get the file name of the image stored in the container.

The following image shows the container in Azure. Note that there is an image stored in it with a size of 101.28KB:

(screenshot of the container in Azure)

This is the code I am trying to use to retrieve the blobs and then read the image Uri's:

AzureStorageController with a Pics Action Method:

public ActionResult Pics()
{
    var imageList = new List<string>();
    var imagesAzure = new myBlobStorageService();
    var container = imagesAzure.GetCloudBlobContainer();

    foreach (var blobItem in container.ListBlobs())
    {
        imageList.Add(blobItem.Uri.ToString());
    }
    return View(imageList);
}

The GetCloudBlobContainer Method of the myBlobStorageService class:

public CloudBlobContainer GetCloudBlobContainer()
{
    string accountName = \""fmfcpics\"";
    string accountKey = \""xxxxxx/yyyyyyyyy/3333333333==\"";

    StorageCredentials credentials = new StorageCredentials(accountName, accountKey);
    CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, true);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference(\""images\"");

    if (container.CreateIfNotExists())
    {
        container.SetPermissions(new BlobContainerPermissions() { PublicAccess = BlobContainerPublicAccessType.Blob });
    }

    return container;
}

The Pics View:

@{ ViewBag.Title = \""Pics\""; }

<h2>Pics</h2>

@foreach (var item in Model)
{
    <img src=\""@item\"" alt=\""picture\"" width=\""200\"" height=\""200\"" />
}

The URI in the list being passed to the Pics view is https://fmfcpics.blob.core.windows.net/images/myblob, but it does not feature the image file name.

Any help with this would be appreciated.
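One detail worth checking here (a sketch, assuming the same WindowsAzure.Storage client as above): ListBlobs returns only the top-level items by default, and each blob's Uri simply reflects the name the blob was uploaded with. A flat listing enumerates everything in the container by its full name:

```csharp
// Sketch: useFlatBlobListing enumerates all blobs (full names in the Uri),
// instead of stopping at virtual directory entries.
foreach (var blobItem in container.ListBlobs(null, useFlatBlobListing: true))
{
    imageList.Add(blobItem.Uri.ToString());
}
```

If the Uri still ends in \""myblob\"", the blob was probably uploaded under that literal name.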

""",azure-storage +57408424,"How to trigger your automation script from VSTS and get the test execution result back in VSTS

I have an automation script built with Java/Selenium. I want to run my test cases from VSTS by triggering my script, and I also want to get the results back in VSTS. Can anybody show me how to make that happen? Also, where should I keep my project?

I was doing research on it, but this doesn't make sense to me: https://docs.microsoft.com/en-us/azure/devops/pipelines/test/continuous-test-selenium?view=azure-devops

""",azure-devops +55915618,"PublishTestResults with suite-level message output

I am well aware that what I am looking to do is not currently possible so I am looking for a workaround.


First of all, assume a microservice-based, highly modular application system. Each microservice has its own unit tests that are run in its build pipeline; the results are published to Azure DevOps - no problems at this point.

However, there is a need to run some integration (almost end-to-end) tests (let's call it the integration test suite) on a properly deployed application system. That is done on a designated environment (let's call it the integration test environment). The test suite would be executed in several different scenarios, so the output needs to reflect that (this simply means that if there are 4 scenarios, the test result output file will have 4 test suites).

Aside from that there must be a directly visible link between application system releases and their test results.

In order to achieve this, there is a release pipeline where the first three stages are: Deploy to DEV, Deploy to INT, and Run Integration Tests.

Naturally, the PublishTestResults task is used to present a nice overview of the integration test results. The integration test orchestrator application generates one JUnit XML per scenario, and at the end of testing they are made available to the pipeline.


With that said, here comes the problem: there is a need to present certain statistics about the tests run in each scenario/suite (more interesting stuff than X/Y tests succeeded in Z seconds).

I know that Jenkins CI seems to follow the JUnit XML format described by Dirk Jagdmann, namely that a <testsuite> element can have a <system-out> element, and this would appear on the test suite visualization.

Unfortunately (!) the Azure DevOps PublishTestResults task only reads <system-out> under individual <testcase> elements (the link above shows the result format mapping). The irony is that the test suite and test case blades are rendered mostly the same - they both have the Debug section, where you have an accordion with Error message, Stack Trace, etc. (which will not be read/populated for <testsuite>, though).

The same applies to JUnit format extensions that would allow referencing extra attachment files in the XML.

Now, if this were a build pipeline, I would not care and would just use the PublishBuildArtifacts task to add some .TXT attachments (one per scenario/suite) and call it done. However, this task cannot be used in a release pipeline.


I do need the extended summary/statistics per suite, and I cannot have them buried somewhere in an XML which can only be downloaded after taking several possibly obscure steps. This automatically rules out uploading them somewhere else - if the CI/CD is in Azure DevOps, then everything about it should stay there.

I have full control over the test result XML generation.

""",azure-devops +44162372,"Azure Queue Storage triggered without removing message

How can I keep a message in the \""function App\"" until I decide to remove it?

When I build a console app in C#, I can decide when to remove the message that I read, with:

queue.DeleteMessage(msg); 

I'm able to read the queue automatically with this procedure: functions-create-storage-queue-triggered-function.

The problem is like Azure said:

  1. Back in Storage Explorer click Refresh and verify that the message has been processed and is no longer in the queue.

In this context I can't remove the message myself when the function is done.

I tried throw new Exception(\""failed\""); to simulate a failed function, but the message is removed anyway.

I'm looking to keep this message in queue until I decide to remove it (At the end of the function).
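As far as I understand the queue trigger, a thrown exception returns the message to the queue for a retry rather than keeping it there indefinitely; after maxDequeueCount failed attempts the runtime moves it to a poison queue instead of the original queue. The retry behavior can be tuned in host.json (v1 settings; values here are illustrative, not recommendations):

```json
{
  \""queues\"": {
    \""visibilityTimeout\"": \""00:00:30\"",
    \""maxDequeueCount\"": 5
  }
}
```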

""",azure-functions +42445686,"Access Azure Function runtime settings

Is it possible to access/update the Web.config used by WebJobs.Script.WebHost or the func.exe.Config (in C:\\Users\\{user}\\AppData\\Local\\Azure.Functions.Cli\\1.0.0-beta.91\\)?

When I create an Azure Function using the Consumption plan and browse the file share, I do not see either of these files in any of the directories, but I'm assuming that the runtime is getting its settings from one of these files or something similar.

Essentially I would like to remove the standard .NET headers being returned by setting some values in one of these .config files.

""",azure-functions +14969972,"Using Azure Storage Tables as Queues with multiple Worker Roles processing it?

My application will be receiving 1000+ requests/transactions every second via multiple instances of the Web Role. These roles will write a record for every transaction across multiple Storage Tables (randomly, to spread Azure's 500 transactions/sec limit). Now I need a reliable way to process/aggregate this data using multiple Worker Roles and write the results to my SQL database. In other words, this needs to scale horizontally.

I need to retain/archive all of the transactions in the Storage Tables post-processing, so I could go with having one set of tables for queues and, when they are processed, move them onto archive tables - or perhaps there is a way to do this with a single table, I'm not sure.

What would you recommend as a mechanism to distribute the current workload in these queues across my Worker Roles? Obviously each role has to be aware of what every other role is working on, so they only work on unclaimed transactions. I think each role will be retrieving 1000 records from the queue as a single workload, and multiple Worker Roles could be working on the same queue.

Should I keep the Worker Roles' \""state\"" in a cache, or perhaps in SQL Server?

Your suggestions are much appreciated.

""",azure-storage +38950821,"I am unable to open Azure's VM port

What I have tried so far:

  1. Created Azure VMs both on Classic and ARM.
  2. Created endpoints on the Classic machine and NSG rules on the ARM machine for port 9000.
  3. Opened (allowed) port 9000 in the firewall on Windows Server R2 Datacenter.
  4. Checked the port status on check-host.net.
  5. The default port (Remote Desktop) is open; other ports are closed.

This is how I have created my endpoint in the Azure Classic VM and made new firewall inbound/outbound rules.

(screenshot of the endpoint configuration)

Test Result of My Custom Port (Closed) & Remote Desktop Port (Open):

(I'm going to add my second image as a link in the comments)

Sorry for the improper way of posting screenshots... actually, I am new here, so I can post only up to two links.

""",azure-virtual-machine +48712764,"Max Pool Size ignored working with Azure SQL and Azure AppServices

I'm working on an ASP.NET Web API project (full .NET Framework 4.6.1) using an Azure SQL Database; the API is deployed on an Azure App Service. Regarding the service tiers, we are using S2 for the Azure SQL Database (50 DTU) and B1 for the App Service where the API endpoint is deployed (1 core and 1.75 GB of RAM). At this moment we are using 2 instances (2 VMs with a load balancer).

Our QA team is trying to find out the capacity of the platform in terms of performance. They have configured a performance test with JMeter which consists of launching 4000 requests during an interval of 60 seconds.

After the first executions of the performance tests, the ratio of HTTP 500 errors was very high; after taking a look at the logs, we found a lot of exceptions like this:

System.InvalidOperationException: Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size was reached.    at System.Data.Common.ADP.ExceptionWithStackTrace(Exception e) --- End of stack trace from previous location where exception was thrown ---    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)    at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.<>c__DisplayClass4.<<ExecuteAsync>b__3>d__6.MoveNext() --- End of stack trace from previous location where exception was thrown ---    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)    at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.<ExecuteAsyncImplementation>d__9`1.MoveNext() --- End of stack trace from previous location where exception was thrown ---    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)    at System.Data.Entity.Core.EntityClient.EntityConnection.<OpenAsync>d__8.MoveNext() 

The first thing I thought of was a connection leak issue. We reviewed the code and monitored the connections on SQL Server using the sp_who2 command, but the connections were being disposed as expected.

We are using an injection container that creates an Entity Framework context (queries are async) each time a new request must be processed; the Entity Framework context is disposed automatically when the request ends (scoped dependencies).

The conclusion we reached was that we needed to increase the size of the connection pool to mitigate the timeouts in scenarios with huge traffic load.

Doing a quick search on the internet, I found out that the default value of Max Pool Size is 100:

https://www.connectionstrings.com/all-sql-server-connection-string-keywords/

I decided to increase the value to 400:

Server=tcp:XXXX.database.windows.net,1433;Initial Catalog=XXXX;Persist Security Info=False;User ID=XXXX;Password=XXXXXXXXXXXX;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Max Pool Size=400; 

After repeating the performance test, to our surprise we didn't notice any improvement, since we were receiving the same ratio of HTTP 500 errors. We reached the conclusion that Max Pool Size was being ignored.

The next thing we did was monitor SQL Server during the performance test in order to find out how many sessions were opened from each host process; we are using the following SQL statement for this purpose:

SELECT
    COUNT(*) AS sessions,
    host_name,
    host_process_id,
    program_name,
    DB_NAME(database_id) AS database_name
FROM
    sys.dm_exec_sessions AS s
WHERE
    (is_user_process = 1) AND (program_name = '.Net SqlClient Data Provider')
GROUP BY host_name, host_process_id, program_name, database_id
ORDER BY sessions DESC

After monitoring the sessions opened by each host process (the virtual machines where the API endpoint is deployed), we found out that only 128 database sessions were being created from each virtual machine.

At this point, several options come to mind that could explain such weird behaviour:

  • Bearing in mind that connection pooling is a concept that belongs to the client side, the first thing I thought was that some kind of parameter in the IIS application pool was responsible for such behaviour.
  • Another option would be that only 128 sessions can be opened per host process and database login. I didn't find anything on the internet that points to this... but in other databases, like Oracle, this constraint can be configured in order to limit the number of sessions opened by each login.
  • The last option: in some blogs and Stack Overflow threads I have read that the exception we are receiving (The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached) can be misleading, and the possibility exists that some other problem is causing the exception.

The quick solution would be to disable pooling in the connection string, but this is the last thing I would do.

Another solution would be to scale out the App Service in order to add more VM instances, but this is expensive in terms of money.

Does anyone know if there is some kind of limitation in Azure App Services which explains why only 128 sessions are opened when connection pooling is enabled?

""",azure-web-app-service +38396653,"Multiple VM Creation by ARM Powershell approach

I have a PowerShell workflow (.psm1 file) where I am trying to create 5 VMs in parallel, using the ARM cmdlets. I am getting an error:

Error- Cannot validate argument on parameter 'SubnetId'. The argument is null or empty. Provide an argument that is not null or empty and then try the command again.

Here is my challenge:

  1. Even if I remove the -parallel parameter from foreach, it makes no difference.

  2. If I run the same code NOT inside a workflow (a .ps1 file), without the -parallel parameter, I am able to create the 5 VMs.

Code-

workflow Create-VMs {
    $UserName = "abc@cde.onmicrosoft.com"
    $pwd = ConvertTo-SecureString "xxxxxxxx" -AsPlainText -Force
    $AzureCredential = New-Object System.Management.Automation.PSCredential($UserName, $pwd)

    Login-AzureRmAccount -Credential $AzureCredential
    Add-AzureRmAccount -Credential $AzureCredential
    Select-AzureRmSubscription -SubscriptionName "xxxxx"

    $virtualNetworkName = "myvpn"
    $locationName = "East US"
    $ResourceGroupName = "myrg"
    $user = "adminuser"
    $password = "AdminPass123"
    $VMSize = "Standard_D2"
    $sourcevhd = "https://abc.blob.core.windows.net/vhds/windowsserver2008.vhd"
    $virtualNetwork = Get-AzureRmVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $virtualNetworkName

    foreach -parallel ($i in 1..5)
    {
        $VMName = "myname" + $i
        $destinationVhd = "https://abc.blob.core.windows.net/vhds/windowsserver2008" + $i + ".vhd"
        $staticip = "dynamicip" + $i
        $virtualNetwork = Get-AzureRmVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $virtualNetworkName
        $publicIp = New-AzureRmPublicIpAddress -Name $staticip -ResourceGroupName $ResourceGroupName -Location $locationName -AllocationMethod Dynamic
        $networkInterface = New-AzureRmNetworkInterface -ResourceGroupName $ResourceGroupName -Name $VMName -Location $locationName -SubnetId $virtualNetwork.Subnets[0].Id -PublicIpAddressId $publicIp.Id
        $vmConfig = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize
        $vmConfig = Set-AzureRmVMOSDisk -VM $vmConfig -Name $VMName -VhdUri $destinationVhd -CreateOption FromImage -Windows -SourceImageUri $sourcevhd
        $vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $networkInterface.Id
        $securePassword = ConvertTo-SecureString $password -AsPlainText -Force
        $cred = New-Object System.Management.Automation.PSCredential ($user, $securePassword)
        Set-AzureRmVMOperatingSystem -VM $vmConfig -Windows -Credential $cred -ProvisionVMAgent -ComputerName $VMName
        New-AzureRmVM -VM $vmConfig -Location $locationName -ResourceGroupName $ResourceGroupName
    }
}

I am not able to find out what the actual problem is. Is there any other approach for creating multiple VMs in parallel using ARM?

""",azure-virtual-machine +57555223,Azure build pipeline publish to specific folder

I have an Azure build pipeline and a Publish task:

- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: 'publish'
    projects: '**/MyProj.csproj'
    arguments: '--configuration $(buildConfiguration) --output $(build.artifactstagingdirectory)\webjob\App_Data\jobs\triggered\MyProjWebJob /p:AssemblyVersion=$(GitVersion.AssemblySemVer)'
    zipAfterPublish: false
    publishWebProjects: false

As you can see, I am specifying an output directory; it is for a WebJob and has to have this format. The problem is that the publish adds another directory with the name of the project:

$(build.artifactstagingdirectory)\webjob\App_Data\jobs\triggered\MyProjWebJob\my.proj.webjob 

and it puts all the artifacts in there, but I do not want this additional directory: when I deploy, it can't be there or the WebJob isn't accessible. How do I publish without creating this directory?
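For reference, one likely fix (an assumption based on the DotNetCoreCLI@2 task's documented inputs) is the `modifyOutputPath` flag, which controls whether the task appends the project-name folder under the `--output` directory when `zipAfterPublish` is false:

```yaml
# Sketch: same publish step with modifyOutputPath disabled, so no
# project-name subfolder is created under the WebJob output path.
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: 'publish'
    projects: '**/MyProj.csproj'
    arguments: '--configuration $(buildConfiguration) --output $(build.artifactstagingdirectory)\webjob\App_Data\jobs\triggered\MyProjWebJob'
    zipAfterPublish: false
    publishWebProjects: false
    modifyOutputPath: false
```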

,azure-devops +55144062,"Azure Web App with Active Directory Express with Graph API to get user photo

My Azure Web App has Active Directory enabled using the Express option. I can get the user claims/user's name from /.auth/me. How do I then get the user's photo/avatar? The token I get does not work in a Graph API call. Below is the error I get from the Graph API, followed by my code.

Please help! I've spent hours searching and reading docs but nothing seems to address the Express AD scenario. Thanks, Donnie

{
  "error": {
    "code": "InvalidAuthenticationToken",
    "message": "CompactToken parsing failed with error code: 80049217",
    "innerError": {
      "request-id": "e25f1fe5-4ede-4966-93c2-6d92d34da6ae",
      "date": "2019-03-13T14:13:26"
    }
  }
}

axios.get('/.auth/me').then(resp => {
  if (resp.data) {
    loggedInUser = {
      accessToken: resp.data[0].access_token,
      userId: resp.data[0].user_id,
      username: resp.data[0].user_claims[9].val,
      lastname: resp.data[0].user_claims[8].val,
      fullname: resp.data[0].user_claims[11].val,
      avatar: 'https://cdn.vuetifyjs.com/images/lists/1.jpg'
    }
    let config = {
      'headers': {
        'Authorization': 'Bearer ' + loggedInUser.accessToken
      }
    }
    axios.get('https://graph.microsoft.com/v1.0/me/photos/48x48/$value', config).then(resp => {
      let photo = resp.data;
      const url = window.URL || window.webkitURL;
      const blobUrl = url.createObjectURL(photo);
      document.getElementById('avatar').setAttribute("src", blobUrl);
      loggedInUser.avatar = blobUrl;
      console.log(blobUrl)
    });
  }
})
""",azure-web-app-service +54672439,Azure DevOps - Display name Updates not being updated in Work Items

I'm having some problems updating the Display name from users.

The point is that we have some other systems which use the name from AD to locate users in Azure DevOps online (I know the name should not be used as a key; unfortunately we have no control over those old systems...).

We have a thousand users whose display names in Azure DevOps differ from AD, because users can put in anything they want.

I started my tests by changing my own display name from my profile. To my surprise, it doesn't change anywhere else: if I go back to my profile it is changed, but on every work item it remains the old one. Should it not have changed on work items?

The second question would be whether there's an easy way to bulk-change the display name for all users. I have found a couple of examples, but they are from 2010 and I'm not sure they would work on the Azure DevOps online version.

,azure-devops +45106071,"How to disable RC4 cipher in Azure VM Scaleset

I have a VM scale set with this image:

Publisher: MicrosoftWindowsServer
Offer: WindowsServer
SKU: 2016-Datacenter-with-Containers
Version: latest

These machines are running an SSL web endpoint hosted in Service Fabric. The website is built in .NET Core with a WebListener, which probably uses http.sys.

I was wondering why new VM images still support RC4 ciphers, and how to disable them. I don't want to do it manually because that would break the autoscaling.

Similar issue but then for Worker roles: How to disable RC4 cipher on Azure Web Roles

""",azure-virtual-machine +57286438,"How to authenticate to Azure database with the users credentials not the web apps

I have an ASP.NET MVC web application that connects to an Azure SQL database. I have an account set up on that database using my AAD login. When I run locally (localhost), the web application loads fine, my credentials are authenticated successfully, and I am able to query the database. When I publish the application to an App Service in the cloud, I am unable to authenticate to the database.

I followed this tutorial https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi initially which I understand authenticates as the app itself once published (I've proved this by registering the app to AAD and adding the Application API login to the Database)

What I really want is a way to authenticate as the user of the app, not the app itself, i.e. an Azure version of the Kerberos setup we currently use for our on-prem applications.

""",azure-web-app-service +32376349,"How are the \Cloud Services\"" created for Virtual Machines and Azure Cloud Services related?

When you create an Azure VM it has to be placed into a Cloud Service (either new or existing).

Is that the exact same logical structure as an Azure Cloud Service that's created when deploying Web and Worker roles via Visual Studio?

Can I deploy roles from VS into a Cloud Service created via VM creation? Can I deploy a VM into a Cloud Service created via VS deployment? If either of those is true, how does that "free-standing" VM relate to the role VMs? Is it just floating inside the Cloud Service, independent of the role VMs?

Thanks in advance!

""",azure-virtual-machine +16127953,Windows Azure How to make custom registration process in Android App

I am using Windows Azure services for my Android app. Windows Azure provides authentication via Facebook, Google, Twitter, etc.

But I want custom registration for users, so that once a user successfully registers in the app, they can log in with the user id and password they created at registration time.

How is this possible using Windows Azure in an Android app?

,azure-storage +51564708,"How to import NetworkManagementClient in Azure using Python?

I want to obtain the public IP of a NIC interface in Azure using the Python SDK, so I need to import NetworkManagementClient.

But when I do the following:

"from azure.mgmt.compute import NetworkManagementClient" or "from azure.mgmt.resource import NetworkManagementClient"

I am not able to import.

Any fixes?
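For what it's worth, in the split Azure SDK for Python each management client lives in its own `azure.mgmt.*` package, and `NetworkManagementClient` ships in `azure-mgmt-network` rather than the compute or resource packages. A small stdlib-only illustration of that layout (the mapping below is an assumption about the package structure, not exhaustive):

```python
# Each management client is imported from its own sub-package, e.g.
#   pip install azure-mgmt-network
#   from azure.mgmt.network import NetworkManagementClient
# Illustrative mapping of client class -> module it is imported from:
MGMT_CLIENT_MODULES = {
    "ComputeManagementClient": "azure.mgmt.compute",    # VMs, disks
    "ResourceManagementClient": "azure.mgmt.resource",  # resource groups
    "NetworkManagementClient": "azure.mgmt.network",    # NICs, public IPs
}
print(MGMT_CLIENT_MODULES["NetworkManagementClient"])  # azure.mgmt.network
```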

""",azure-virtual-machine +35876272,Azure vm quick-start isn't linking the created IP address to the vm

When I use the VM quick-start to spin up a new Linux VM, it shows the public IP address being provisioned; however, the address is not linked to the VM itself. In the portal I can see that the VM and the IP both exist, but I must manually link the IP to the VM. Why is this happening?

,azure-virtual-machine +21624860,"Windows Azure Storage emulator error occurred

I was trying to install the Windows Azure SDK when this shows up:

System.NullReferenceException: Object reference is not set to an instance of an object
   at Microsoft.ServiceHosting.DevelopmentStorage.Tools.DSInit.Controller.PerformActions()
   at Microsoft.ServiceHosting.DevelopmentStorage.Tools.DSInit.DSInitProgram.Run(string args here)
   at Microsoft.WindowsAzure.DSinit.Program.Main(string args here)

I downloaded the Windows Azure SDK for VS2013 through the Web Platform Installer and this error shows up. I then downloaded the standalone installer, but the same error appears. Can someone please help me identify the root cause of this? Thanks.

""",azure-storage +52663175,How to manually start $(Rev:r) counter from specific number?

Say I have a library which is already version 1.0.15

I migrate my build process to Azure DevOps and want auto increment of build number. So in the build pipeline options I set Build number format to 1.0.$(Rev:r).

But now it starts making builds at 1.0.1

So how do I artificially increment this to 15?
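One approach (an assumption, based on Azure Pipelines' `counter` expression, which takes a prefix and a seed) is to replace `$(Rev:r)` with an explicitly seeded counter. The counter returns the seed on the first run for a given prefix and increments afterwards, so seeding at 16 would make the next build 1.0.16; changing the prefix string resets it:

```yaml
# Sketch: seed the revision counter so builds continue after 1.0.15.
variables:
  rev: $[counter('1.0', 16)]
name: 1.0.$(rev)
```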

,azure-devops +32470159,"How to get php include_path to work on a custom directory '/var/custom_directory'?

I thought php include path was a pretty simple concept. I've done it many times but now am having trouble getting it to work.

I am running a CentOS 7.1 server on Azure with Apache/2.4.6 and PHP/5.4.16.

When I modify the include_path within php.ini, the error after restarting Apache shows:

Failed opening required 'xfile' (include_path='.:/var/custom_directory')...

The include path is in the proper file format.

I might have an ownership problem.

I can place my files inside the default /usr/share/php directory and the pages "include" them. However, when I try to put my own directory inside /var, they do not.

I have done this before, so I do not know why it isn't working now. I have chowned and chmodded these directories and their contents to death, even mimicking the directories of a server that works, switching ownership to apache and granting full access, just trying to get it to see the file from my /var/www/html/index.php.

Am I missing something? Is there something I need to enable, grant access to, or modify in the php.ini or httpd.conf files?

Further information: This is the only php.ini file included in the system /etc/php.ini

The purpose of the include path is to provide coding / user files behind the firewall.

I don't think this matters, but my VM is in Azure.

Putting something like this

include('/var/custom_directory/file.php');  

doesn't work either. Why?

""",azure-virtual-machine +46474941,Azure functions storage recommandation

I'm using Azure Functions as microservices. I have 7 function apps running in an App Service. What is the recommendation about the storage accounts (AzureWebJobsStorage & AzureWebJobsDashboard)?

Should I create 2 storage accounts (1 for AzureWebJobsStorage & 1 for AzureWebJobsDashboard) for each function app, or can I share them?
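The two settings are independent app settings, so they can point at the same account or at different ones; a sketch of splitting them (the account names and the local.settings.json layout here are illustrative assumptions, and keys are elided):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=funcruntimestore;AccountKey=...",
    "AzureWebJobsDashboard": "DefaultEndpointsProtocol=https;AccountName=funclogstore;AccountKey=..."
  }
}
```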

,azure-functions +55273472,How to create a self Time triggered Function?

I'm looking for a self-triggering function that starts at a given time.

E.g. a user has a conference that starts at 25/04/2019 05:30. The user should get a notification at 25/04/2019 05:25 or 05:29 saying that the conference is about to start.

I have created an Azure Function (time-triggered) which runs every minute and checks whether the current time is the conference time minus 4 minutes, then sends a notification that the conference is about to start.

In the future there will be multiple users, so I do not want the function to run every minute. Is there a way for the function to execute itself at 05:25, or at the conference time? There can be 100 users, and they will all have different times. I'm just looking for options on how to implement this in a better way.

It is a .NET Core site hosted on Azure, with an Azure Function running every minute to check the reminders.
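One alternative to polling every minute is to enqueue one message per conference whose visibility is delayed until the notification time, and let a queue-triggered function fire exactly once per user. A stdlib sketch of computing that delay (the five-minute lead and the function name are assumptions):

```python
from datetime import datetime, timedelta, timezone

def notification_delay_seconds(conference_start, lead=timedelta(minutes=5), now=None):
    """Seconds until (conference_start - lead); clamped at 0 if already past.

    The result could be used, e.g., as the initial visibility timeout of a
    queue message, so the message only becomes processable at notify time.
    """
    now = now or datetime.now(timezone.utc)
    remaining = (conference_start - lead) - now
    return max(0, int(remaining.total_seconds()))

start = datetime(2019, 4, 25, 5, 30, tzinfo=timezone.utc)
ref = datetime(2019, 4, 25, 5, 0, tzinfo=timezone.utc)
print(notification_delay_seconds(start, now=ref))  # 1500 (25 minutes)
```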

,azure-functions +50037032,"Azure function failed to deploy with the error ERROR_CONNECTION_TERMINATED

Issue Description

I am trying to deploy the Azure Function using Visual Studio and getting the error: Web deployment task failed (complete error logs are given below). I have followed all the steps mentioned in the article http://techgenix.com/video-publish-azure-functions-azure-using-visual-studio-tools-azure-functions/ but am still facing the issue. Please help, as I have spent a lot of time on this issue and haven't found any solution.

Fiddler was not running when I deployed the azure function using visual studio.

Error logs

1>------ Build started: Project: TestFunction, Configuration: Release Any CPU ------
1>TestFunction -> C:\Repo\TestFunction\TestFunction\bin\Release\net461\bin\TestFunction.dll
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
Publish Started
TestFunction -> C:\Repo\TestFunction\TestFunction\bin\Release\net461\bin\TestFunction.dll
TestFunction -> C:\Repo\TestFunction\TestFunction\obj\Release\net461\PubTmp\Out\
C:\Program Files\dotnet\sdk\2.1.300-preview1-008174\Sdks\Microsoft.NET.Sdk.Publish\build\netstandard1.0\PublishTargets\Microsoft.NET.Sdk.Publish.MSDeploy.targets(139,5): error : Web deployment task failed. (Web Deploy experienced a connection problem with the server and had to terminate the connection. Contact your server administrator if the problem persists. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_CONNECTION_TERMINATED.) [C:\Repo\TestFunction\TestFunction\TestFunction.csproj]
C:\Program Files\dotnet\sdk\2.1.300-preview1-008174\Sdks\Microsoft.NET.Sdk.Publish\build\netstandard1.0\PublishTargets\Microsoft.NET.Sdk.Publish.MSDeploy.targets(139,5): error :  [C:\Repo\TestFunction\TestFunction\TestFunction.csproj]
C:\Program Files\dotnet\sdk\2.1.300-preview1-008174\Sdks\Microsoft.NET.Sdk.Publish\build\netstandard1.0\PublishTargets\Microsoft.NET.Sdk.Publish.MSDeploy.targets(139,5): error : Web Deploy experienced a connection problem with the server and had to terminate the connection. Contact your server administrator if the problem persists. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_CONNECTION_TERMINATED. [C:\Repo\TestFunction\TestFunction\TestFunction.csproj]
C:\Program Files\dotnet\sdk\2.1.300-preview1-008174\Sdks\Microsoft.NET.Sdk.Publish\build\netstandard1.0\PublishTargets\Microsoft.NET.Sdk.Publish.MSDeploy.targets(139,5): error : Root element is missing. [C:\Repo\TestFunction\TestFunction\TestFunction.csproj]
Publish failed to deploy.
""",azure-functions +55561689,"How ti fix the \You do not have permission to view this directory or page.\"" issue in Azure web app services?

I have a simple web app which, if I deploy to Azure through IntelliJ (using the Azure App plugin), works perfectly. But when I tried deploying using Jenkins, the log says the deployment was successful, yet when I navigate to the site it says "You do not have permission to view this directory or page." Am I missing anything?

As per my understanding, since my project works fine when deployed using IntelliJ but not through Jenkins, the problem must be in my Jenkins job. Here is the configuration I am using: Publish Files; Files: target\spring-mvc-example.war; Source Directory (optional): target; Target Directory (optional): webapps

""",azure-web-app-service +49451858,"Azure PowerShell return the Name of CustomScriptExtension if it exists

I have created a variable called $VMStatus

$VMStatus = Get-AzureRmVM -ResourceGroupName $RGName -VMName $VMName -status 

Now when I run $VMStatus.Extensions.Type it returns the list of virtual machine extensions for the provided entries.

So now when I run $VMStatus.Extensions.Type -Match "Custom" it returns the entry I am interested in: Microsoft.Compute.CustomScriptExtension

The problem I am having is getting the Name of that CustomScriptExtension. I have tried the following with no success:

IF ($VMStatus.Extensions.Type -Match "Custom") {$VMStatus.Extensions.Name}

This will actually return ALL entries for Name since the first part of the IF statement is TRUE.

How do I return just the Name of the CustomScriptExtension if one exists?

""",azure-virtual-machine +53396177,"how to create VM with Json arm template using define VHD (or disk snapshot)?

The question is in the title: "How to create a VM with a JSON ARM template using a defined VHD (or disk snapshot) in Azure?" Thanks for your help!

""",azure-virtual-machine +52987660,vsts-agent in OpenShift

I ran the agent on my machine and it works just perfectly. Then I pushed my image to OpenShift, and I got a permission denied issue: ./start.sh: line 19: /vsts/.token: Permission denied

I tried to find out how to grant these permissions within OpenShift but was not able to find anything yet. Does anyone have any ideas?

,azure-devops +43606096,"Unable to find free machine on self-provisioned load test rig

We have recently been performing load testing as part of a build(using the Cloud Load Test build task) using a self-provisioned load testing rig deployed using the following quick start template -

https://github.com/Azure/azure-quickstart-templates/tree/master/101-vsts-cloudloadtest-rig

This has been working well for us but something seems to have changed and this process no longer works. When the load test task starts we now get the following error:

2017-04-24T14:32:07.4831251Z [Message]This load test was run using self-provisioned rig 'default'. No virtual-user minutes (VUMs) will be charged for this run.
2017-04-24T14:32:07.4881254Z ##[error]Microsoft.PowerShell.Commands.WriteErrorException: Test run could not be started using the self-provisioned rig 0ebc4aad-33b2-495e-a75a-213d4607976b. Number of free machines available in the rig are less than the required number. (Requested – 1, Available - 0, In-Use – 0, Offline – 3).

Using the ManageVSTSCloudLoadAgent.ps1 script

https://blogs.msdn.microsoft.com/visualstudioalm/2016/08/23/testing-privateintranet-applications-using-cloud-based-load-testing/

I can see that there is an agent group called "LoadTesting" with my two provisioned VMs in it, which shows them both as Free. However, the GUID for this LoadTesting group does not match the one in the error message that the build task is attempting to use. According to the script there is only one rig available, so I don't know where the cloud load test task is getting this other one from.

How can I change the task to use the correct group? Or change the "LoadTesting" group to be the default?

I can't find anywhere within the load test definitions, or through the Team Services site, where I can change which rig it uses.

""",azure-devops +45588260,How to automatically restart an app service after certain time?

How do I automatically restart an app service after 24 hours? How do I schedule the app service to restart automatically at a specific time using web jobs?

,azure-web-app-service +30883365,'Your credentials did not work' in MS Azure

I just created an Azure VM using the Windows 8.1 image in the Marketplace. During the creation process I provided a username and password.

After the VM has been created I press connect and try and login via MSTSC - using the credentials that I just entered (with a slash to remove the domain).

But I keep getting 'Your credentials did not work'. What have I done wrong? This procedure has worked for me in the past.

Furthermore when I review the users of the VM through the portal I only see 'Subscription admins' containing my Microsoft ID. I can't login using my Microsoft ID either.

,azure-virtual-machine +53062671,"Azure Functions: Can I have different configuration for BlobTriggered function?

I have a .NET project which contains multiple triggers in the same Azure Functions project (a blob-triggered function and a queue-triggered function).

I need a different concurrency for my blob-triggered function than for the queue-triggered function.

I know that the blob trigger uses a queue internally.

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob#trigger---poison-blobs

Is there any way I can achieve it?

""",azure-functions +39199201,"Git stuck at deltafying objects

My source code (with pictures) is ~500 MB in size.
But I can't push it to any git repository: my git push attempt gets stuck at "deltafying objects" in VS.
I watched my network: in ~3 kB/s, out ~100 kB/s. But it still doesn't push.

Here is a screenshot from Visual Studio (image not included).

I tried pushing it from Visual Studio to Visual Studio Team Services (with git): didn't work. I tried SourceTree (v1.9.6.1) to push it to Bitbucket: didn't work. I tried the git console: didn't work.

My Visual Studio Enterprise 2015 has "Git-2.9.3.2-64bit", and this git version is also installed on my machine.

Update: More information: I tried on source-tree again here is console output;

git -c diff.mnemonicprefix=false -c core.quotepath=false push -v --tags --set-upstream LeanStartup master:master
POST git-receive-pack (163209032 bytes)
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
error: RPC failed; curl 56 SSL read: error:00000000:lib(0):func(0):reason(0), errno 10054
Pushing to https://*****@bitbucket.org/*****/leanstartup.git
Everything up-to-date
Completed with errors, see above.

Update 2: I tried another solution (just changed 1-2 things to commit and push); git in Visual Studio & SourceTree works fine there.
So maybe I need to suspect this one solution, "leanstartup"?
I tried deleting the files ".gitattributes" and ".gitignore" and the folder ".git" in the solution folder to re-assign the git source control.
But again it hangs on "deltafying objects".
Do I need to delete more git data from somewhere else to clear all git assignment on this project?

What can I do to fix this problem ?
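Not specific to this repository, but a commonly suggested workaround for large HTTPS pushes that die mid-transfer (as in the `curl 56` error above) is raising Git's HTTP post buffer; a sketch (the 500 MB value is an assumption, sized above the ~163 MB pack):

```ini
# ~/.gitconfig — equivalently: git config http.postBuffer 524288000
[http]
    postBuffer = 524288000
```

For a repo that is ~500 MB of pictures, moving the binary assets to Git LFS may be the more durable fix than tuning the push itself.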

""",azure-devops +9398159,"How to control azure storage account costs in Azure when using WAD tables

We do not use our Azure storage account for anything except standard Azure infrastructure concerns (i.e. no application data). For example, the only tables we have are the WAD (Windows Azure Diagnostics) ones, and our only blob containers are for vsdeploy, iislogfiles, etc. We do not use queues in the app either.

14 cents per gigabyte isn't breaking the bank yet but after several months of logging WAD info to these tables the storage account is quickly nearing 100 GB.

We've found that deleting rows from these tables is painful with continuation tokens etc because some contain millions of rows (have been logging diagnostics info since June 2011).

One idea I have is to "cycle" storage accounts. Since they contain diagnostic data used by MS to help us debug unexpected exceptions and errors, we could log the WAD info to storage account A for a month, then switch to account B for the following month, then C.

By the time we get to the 3rd month it's a pretty safe bet that we no longer need the diagnostics data from storage account A and can safely delete it or delete the tables themselves rather than individual rows.

Has anyone tried an approach like this? How do you keep WAD storage costs under control?
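The cycling idea described above can be mechanized; a minimal illustration (the account names are hypothetical) of picking the active diagnostics account by calendar month:

```python
from datetime import date

# Hypothetical storage account names; WAD logging rotates through them monthly.
WAD_ACCOUNTS = ["wadstorea", "wadstoreb", "wadstorec"]

def wad_account_for(day):
    """Return the storage account that receives diagnostics for `day`'s month.

    When an account comes back into rotation after a full cycle, its stale WAD
    tables can simply be deleted wholesale instead of deleting rows one by one.
    """
    month_index = day.year * 12 + (day.month - 1)
    return WAD_ACCOUNTS[month_index % len(WAD_ACCOUNTS)]

print(wad_account_for(date(2012, 1, 15)))  # wadstorea
print(wad_account_for(date(2012, 2, 15)))  # wadstoreb
```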

""",azure-storage +18621787,"How to add a empty disk to a virtual machine under new storage location?

I'd like to add an empty data disk to a virtual machine under a new storage location. When I have the virtual machine selected and click Add -> Empty Data Disk, I'm not able to change the storage location. Does anyone know how to change the default storage location when adding an empty disk? Thanks

UPDATE

I believe this is the command that needs to be run. But running it on the vm doesn't do anything.

Get-AzureVM "service" -Name "vmname" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1000 -DiskLabel "sqlsa1" -MediaLocation "http://mystoragelocation.blob.core.windows.net/" -LUN 4 |
    Update-AzureVM
""",azure-virtual-machine +46641696,"Send Artifacts to external server

I'm a little lost between a custom agent, a PowerShell script, etc.

I want to send artifacts (dll) to an external website (MVC) at the end of the build.

What is the simplest way to do it?

  • Make a new custom agent
  • Make a PowerShell script
  • Send a zip file to the website with an existing task ("Publish Artifact", "Upload with cURL", "FTP Upload")

Given my skills, I'm thinking about sending all the artifacts to the website and then just making a call on the website like www.website.com/newArtifactUploaded.

But I have no idea what the best way to do it is, or how to do it. Do you have any suggestions, ideas, or documentation/tutorials?

""",azure-devops +48373669,"azure functions throw System.OperationCanceledException

My project uses an Azure Function to send data to AWS SNS. However, an exception is thrown roughly once per day. We process 400,000 items per day. Does this mean the function ran for more than 5 minutes and Azure Functions cancelled the thread, or is there another reason? We monitor the Azure Function using Application Insights.

"System.Threading.CancellationToken.ThrowOperationCanceledException".

Error log:

System.OperationCanceledException:
   at System.Threading.CancellationToken.ThrowOperationCanceledException (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at Microsoft.Azure.WebJobs.ServiceBus.Listeners.NamespaceManagerExtensions+<CreateSubscriptionIfNotExistsAsync>d__4.MoveNext (Microsoft.Azure.WebJobs.ServiceBus, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at Microsoft.Azure.WebJobs.ServiceBus.Listeners.ServiceBusSubscriptionListenerFactory+<CreateAsync>d__8.MoveNext (Microsoft.Azure.WebJobs.ServiceBus, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexer+ListenerFactory+<CreateAsync>d__5.MoveNext (Microsoft.Azure.WebJobs.Host, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at Microsoft.Azure.WebJobs.Host.Listeners.HostListenerFactory+<CreateAsync>d__10.MoveNext (Microsoft.Azure.WebJobs.Host, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at Microsoft.Azure.WebJobs.Host.Listeners.ListenerFactoryListener+<StartAsyncCore>d__8.MoveNext (Microsoft.Azure.WebJobs.Host, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at Microsoft.Azure.WebJobs.Host.Listeners.ShutdownListener+<StartAsync>d__5.MoveNext (Microsoft.Azure.WebJobs.Host, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at Microsoft.Azure.WebJobs.JobHost+<StartAsyncCore>d__25.MoveNext (Microsoft.Azure.WebJobs.Host, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)
   at Microsoft.Azure.WebJobs.Script.ScriptHostManager.RunAndBlock (Microsoft.Azure.WebJobs.Script, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: C:\projects\azure-webjobs-sdk-script\src\WebJobs.Script\Host\ScriptHostManager.cs: 184)
""",azure-functions +43539498,Azure storage underlying technology

What is Azure Storage made of, i.e. what is the underlying storage technology that supports the Azure Storage we access in the Azure portal?

Is it object based storage or block storage (persistent/ephemeral) similar to categorization in Ceph?

If there is a mix of block- and object-based, which kind of storage is used for each exposed Azure Storage service: block blobs, append blobs, page blobs, storage tables, storage queues, Azure Files?

,azure-storage +56794684,Azure Dev Ops Build Agent's builds and releases used

In Azure DevOps, is it possible to see all of the builds and releases that are configured to use a particular agent pool? I can see the last 30 builds associated with the agent, but I would like to see all builds associated with the pool rather than having to check all of the agent configurations for the build stages. The agent is a self-hosted agent, if that makes a difference. I don't mind whether I get this data through the UI or the REST API.
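One route worth checking (an assumption on my part; verify against the Azure DevOps REST docs) is the distributed-task "job requests" endpoint for an agent pool, which records the builds/releases that ran on the pool. A minimal sketch of composing the request, with `ORG`, `POOL_ID`, and `PAT` as hypothetical placeholders:

```python
import base64

ORG = "myorg"        # hypothetical organization name
POOL_ID = 9          # hypothetical agent pool id
PAT = "my-pat"       # hypothetical personal access token

# Job requests recorded against an agent pool (builds/releases that ran on it).
url = (f"https://dev.azure.com/{ORG}/_apis/distributedtask/"
       f"pools/{POOL_ID}/jobrequests?api-version=5.1")

# A PAT is sent as HTTP Basic auth with an empty user name.
token = base64.b64encode(f":{PAT}".encode("ascii")).decode("ascii")
headers = {"Authorization": f"Basic {token}"}
# e.g. requests.get(url, headers=headers).json()["value"]
```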

,azure-devops +40914738,"Migrate EC2 from AWS to Azure

We have a requirement to migrate an AWS EC2 instance to Azure as a VM. I have been trying to implement this from this source but am unable to complete the process; I got stuck at the Protection Group step.

I have also looked at these other links:

Migrating a VM from EC2 to Azure at 300 Mbps. With this approach I am able to create the VM in the Classic portal but unable to connect to it; only port 80 is active, all other ports are not working.

Migrate virtual machines in Amazon Web Services (AWS) to Azure with Azure Site Recovery

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-vmware-to-azure

https://aws.amazon.com/ec2/vm-import/ (on trying this I hit an unresolved EC2 API export-to-S3 ACL issue)

Can anyone suggest a workflow for implementing this?

""",azure-virtual-machine +54685039,How to fetch a file using nifi from a fileshare location?

I am trying to fetch some files from a file share in an Azure storage account using NiFi. I tried the FetchFTP processor but it is unable to connect to the URL of the file share. I entered what I believe are the correct parameters, but I am getting an error:

Failed to fetch file on remote host due to java.net.UnknownHostException:

,azure-storage +46102880,Azure powershell - how to get FQDN from a VM?

I am automating the deployment of several Azure VMs and I want to WinRM into each of them to finish the deployment. How do I find the FQDN of the VM from the script that creates it?

I'm looking for:

$psVirtualMachine = Add-AzureRmVMNetworkInterface -VM $psVirtualMachine -Id $Nic.Id
$CreateNewVM = New-AzureRMVM -ResourceGroupName $ResourceGroupName -Location $GeoLocation -VM $psVirtualMachine
Enter-PSSession -ConnectionUri https://<-- $CreateNewVM.FQDN -->:5986 -Credential $Cred -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck) -Authentication Negotiate
,azure-virtual-machine +51821295,"Using Kudu with multi-instance app service in Azure

In Azure I have a Linux Web App (for containers) running under an App Service plan with 2 instances (set under the \""Scale out\"" menu item). If I understood correctly, this corresponds to my application being hosted on two separate VM instances in Azure.

If I then use the debug console in the Kudu interface (Development Tools -> Advanced Tools), what am I actually logging on to? Is it one of the VMs that hosts my Docker container? If so, why am I not prompted to choose a VM (seeing as I configured 2 in the plan)?

""",azure-web-app-service +50319740,"Azure Functions - File not found

I'm following this guide to get Azure Functions installed and creating an app: https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local

When I run the command func init MyFunctionProj I should get an output like this:

Writing .gitignore Writing host.json Writing local.settings.json Created launch.json Initialized empty Git repository in D:/Code/Playground/MyFunctionProj/.git/ 

But instead I get this:

Writing .gitignore Writing host.json Writing local.settings.json Writing C:\\Users\\nahue\\dev\\MyFunctionProj\\.vscode\\extensions.json El sistema no puede encontrar el archivo especificado 

Translation: \""The system cannot find the specified file\"".

I've already installed all the things mentioned in the link and it doesn't work. It creates that folder called \"".vscode\"" with that \""extensions.json\"" file inside, and it doesn't create \""launch.json\"", and I don't know why. I have neither Visual Studio nor VS Code installed on my computer.

""",azure-functions +57137042,"Flask Azure web app deployed successfully but showing default page

I deployed a Python Flask app to an Azure web app using local Git. The status in the Deployment Center shows \""success\"", but when I go to the web page it is still the default page that tells me I'm running Python 3.6.6.

When I navigate to the Kudu git clone URI, it says \""no route registered for '/testapp1.git'\"".

The /wwwroot folder in Kudu contains the following files:

env static (css folder) __pycache__ app.py hostingstart-python.html hostingstart-python.py index.html requirements.txt web.config 

A potential problem could be that the web.config file still refers to hostingstart-python.application.

<configuration>
   <appSettings>
      <add key=\""pythonpath\"" value=\""%systemDrive%home\\site\\wwwroot\"" />
      <add key=\""WSGI_HANDLER\"" value=\""hostingstart-python.application\"" />
   </appSettings>
</configuration>

I tried to follow the instructions on https://docs.microsoft.com/en-us/azure/app-service/containers/how-to-configure-python, but that is for Linux, so I'm not sure what to do as I'm running Windows 10.
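If the stale WSGI_HANDLER really is the problem, a hedged sketch of what a corrected web.config might look like (assuming the Flask module is app.py exposing an instance named app; both names are assumptions, not taken from the post):

```xml
<configuration>
  <appSettings>
    <!-- point the handler at the "app" object inside app.py instead of the hosting placeholder -->
    <add key="pythonpath" value="%SystemDrive%\home\site\wwwroot" />
    <add key="WSGI_HANDLER" value="app.app" />
  </appSettings>
</configuration>
```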

""",azure-web-app-service +20968594,Reset message visibility Azure Storage Queue

I have an Azure Storage queue with many messages that were checked out (getQueueMessages) with a very long visibility timeout (>72 hrs via setVisibilityTimeoutInSeconds). The dequeue process crashed, leaving millions of messages stuck in the queue; we now have to wait a long time until they expire and become visible in the queue again.

Is there a way to reset the visibility timeout for all messages in the queue, that is, to make all invisible messages visible again, without having the pop receipt/ID for each message?

,azure-storage +41174693,"Xamarin app crash when attempting to sync SyncTable

I am making an app using Xamarin and Azure Mobile Services. I am attempting to add offline sync capabilities, but I am stuck. I have a service which looks like this:

class AzureService
{
    public MobileServiceClient Client;
    AuthHandler authHandler;
    IMobileServiceTable<Subscription> subscriptionTable;
    IMobileServiceSyncTable<ShopItem> shopItemTable;
    IMobileServiceSyncTable<ContraceptionCenter> contraceptionCenterTable;
    IMobileServiceTable<Member> memberTable;
    const string offlineDbPath = @\""localstore.db\"";

    static AzureService defaultInstance = new AzureService();

    private AzureService()
    {
        this.authHandler = new AuthHandler();
        this.Client = new MobileServiceClient(Constants.ApplicationURL, authHandler);

        if (!string.IsNullOrWhiteSpace(Settings.AuthToken) && !string.IsNullOrWhiteSpace(Settings.UserId))
        {
            Client.CurrentUser = new MobileServiceUser(Settings.UserId);
            Client.CurrentUser.MobileServiceAuthenticationToken = Settings.AuthToken;
        }

        authHandler.Client = Client;

        //local sync table definitions
        //var path = \""syncstore.db\"";
        //path = Path.Combine(MobileServiceClient.DefaultDatabasePath, path);

        //setup our local sqlite store and intialize our table
        var store = new MobileServiceSQLiteStore(offlineDbPath);

        //Define sync table
        store.DefineTable<ShopItem>();
        store.DefineTable<ContraceptionCenter>();

        //Initialize file sync context
        //Client.InitializeFileSyncContext(new ShopItemFileSyncHandler(this), store);

        //Initialize SyncContext
        this.Client.SyncContext.InitializeAsync(store);

        //Tables
        contraceptionCenterTable = Client.GetSyncTable<ContraceptionCenter>();
        subscriptionTable = Client.GetTable<Subscription>();
        shopItemTable = Client.GetSyncTable<ShopItem>();
        memberTable = Client.GetTable<Member>();
    }

    public static AzureService defaultManager
    {
        get { return defaultInstance; }
        set { defaultInstance = value; }
    }

    public MobileServiceClient CurrentClient
    {
        get { return Client; }
    }

    public async Task<IEnumerable<ContraceptionCenter>> GetContraceptionCenters()
    {
        try
        {
            await this.SyncContraceptionCenters();
            return await contraceptionCenterTable.ToEnumerableAsync();
        }
        catch (MobileServiceInvalidOperationException msioe)
        {
            Debug.WriteLine(@\""Invalid sync operation: {0}\"", msioe.Message);
        }
        catch (Exception e)
        {
            Debug.WriteLine(@\""Sync error: {0}\"", e.Message);
        }
        return null;
    }

    public async Task SyncContraceptionCenters()
    {
        ReadOnlyCollection<MobileServiceTableOperationError> syncErrors = null;

        try
        {
            //await this.Client.SyncContext.PushAsync();

            await this.contraceptionCenterTable.PullAsync(
                //The first parameter is a query name that is used internally by the client SDK to implement incremental sync.
                //Use a different query name for each unique query in your program
                \""allContraceptionCenters\"",
                this.contraceptionCenterTable.CreateQuery());
        }
        catch (MobileServicePushFailedException exc)
        {
            if (exc.PushResult != null)
            {
                syncErrors = exc.PushResult.Errors;
            }
        }

        // Simple error/conflict handling. A real application would handle the various errors like network conditions,
        // server conflicts and others via the IMobileServiceSyncHandler.
        if (syncErrors != null)
        {
            foreach (var error in syncErrors)
            {
                if (error.OperationKind == MobileServiceTableOperationKind.Update && error.Result != null)
                {
                    //Update failed, reverting to server's copy.
                    await error.CancelAndUpdateItemAsync(error.Result);
                }
                else
                {
                    // Discard local change.
                    await error.CancelAndDiscardItemAsync();
                }

                Debug.WriteLine(@\""Error executing sync operation. Item: {0} ({1}). Operation discarded.\"", error.TableName, error.Item[\""id\""]);
            }
        }
    }

I am getting this error when SyncContraceptionCenters() runs: System.NullReferenceException: Object reference not set to an instance of an object. As far as I can tell I reproduced the coffeeItems example in my service, but I am stuck.

""",azure-storage +56776185,"Merging coverage results from multiple Azure Pipeline jobs in Python (with pytest)

I've set up my open source project to run CI with Azure Pipelines and am collecting code coverage following the example from the Azure Pipelines docs on how to test Python apps.

This seems to work pretty well, but the code coverage statistics seem to only pick up test results from a single job (at random). To get complete coverage for my project (e.g. for platform-dependent code) I really need to aggregate coverage across all of the test jobs.

Here are the relevant tasks from my pipeline:

- bash: |
    source activate test_env
    pytest xarray --junitxml=junit/test-results.xml \\
    --cov=xarray --cov-config=ci/.coveragerc --cov-report=xml
  displayName: Run tests
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/**/htmlcov'

What's the right way to configure Azure to show this information?

I've tried adding --cov-append into my pytest invocation but that doesn't seem to make a difference.
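One commonly suggested shape for this (a sketch only; the job names, artifact wiring, and paths below are assumptions, not taken from the pipeline above) is to have each test job publish its raw coverage data file, then run coverage.py's `combine` in a final job that depends on all of them and publish a single merged report:

```yaml
# Sketch: merge per-job coverage data before publishing one report.
- job: merge_coverage
  dependsOn: [tests_linux, tests_windows]      # hypothetical test jobs
  steps:
    - task: DownloadPipelineArtifact@2         # collect the .coverage.* data files
      inputs:
        path: $(System.DefaultWorkingDirectory)
    - bash: |
        pip install coverage
        coverage combine                       # merges .coverage.* files
        coverage xml                           # writes a single coverage.xml
    - task: PublishCodeCoverageResults@1
      inputs:
        codeCoverageTool: Cobertura
        summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage.xml'
```

Note that `coverage combine` expects the per-job data files to be distinguishable (e.g. run with parallel mode so they are named `.coverage.*`).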

""",azure-devops +57291779,"Publish build artifact through build.cake instead of Azure Devops
  1. Is it possible to publish an build artifact to Azure Devops/TFS through build.cake script?
  2. Where should the responsibility for publishing the build artifact be configured when converting to cake scripts in the build.cake script or in the Azure DevOps pipeline?

To achieve versioning in our build and release pipelines, we decided to move our tasks (gitversion, clean, build, tests, ...) to be handled by a Cake script stored in each repository instead.

Is there a way to replace the Publish Build Artifact (Azure DevOps) task with a task in build.cake? I have searched the official documentation of both Azure and Cake but cannot seem to find a solution. Copying the build artifacts to a staging directory is possible; publishing the artifact, however, is where it gets complicated.

Currently a snippet of our build.cake.

Task(\""Copy-Bin\"")
    .WithCriteria(!isLocalBuild)
    .Does(() =>
    {
        Information($\""Creating directory {artifactStagingDir}/drop\"");
        CreateDirectory($\""{artifactStagingDir}/drop\"");
        Information($\""Copying all files from {solutionDir}/{moduleName}.ServiceHost/bin to {artifactStagingDir}/drop/bin\"");
        CopyDirectory($\""{solutionDir}/{moduleName}.ServiceHost/bin\"", $\""{artifactStagingDir}/drop/bin\"");
        // Now we should publish the artifact to TFS/Azure Devops
    });

Solution

A snippet of an updated build.cake.

Task(\""Copy-And-Publish-Artifacts\"")
    .WithCriteria(BuildSystem.IsRunningOnAzurePipelinesHosted)
    .Does(() =>
    {
        Information($\""Creating directory {artifactStagingDir}/drop\"");
        CreateDirectory($\""{artifactStagingDir}/drop\"");
        Information($\""Copying all files from {solutionDir}/{moduleName}.ServiceHost/bin to {artifactStagingDir}/drop/bin\"");
        CopyDirectory($\""{solutionDir}/{moduleName}.ServiceHost/bin\"", $\""{artifactStagingDir}/drop/bin\"");
        Information($\""Uploading files from artifact directory: {artifactStagingDir}/drop to TFS\"");
        TFBuild.Commands.UploadArtifactDirectory($\""{artifactStagingDir}/drop\"");
    });
""",azure-devops +56762589,"Unable to load the dll libwkhtmltox using dinktopdf for a .net core2.2 azure functionapp

I have an Azure function app using .NET Core 2.2 that converts HTML text to PDF. I am using DinkToPdf. When I run the function I get \""Unable to load the libwkhtmltox.dll\"". I have tried the alternate solutions mentioned in some of the posts, but it still throws the same error.

I tried using Directory.GetCurrentDirectory and using Path.Combine.

The code is below:

        [FunctionName(\""Function1\"")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, \""get\"", \""post\"", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation(\""C# HTTP trigger function processed a request.\"");

            CustomAssemblyLoadContext context = new CustomAssemblyLoadContext();
            var architectureFolder = (IntPtr.Size == 8) ? \""64 bit\"" : \""32 bit\"";
            context.LoadUnmanagedLibrary($@\""{Directory.GetCurrentDirectory()}\\Dependencies\\libwkhtmltox.dll\"");

            var IocContainer = new SynchronizedConverter(new PdfTools());
            string html = await new StreamReader(req.Body).ReadToEndAsync();
            var globalSettings = new GlobalSettings
            {
                ColorMode = ColorMode.Color,
                Orientation = Orientation.Portrait,
                PaperSize = PaperKind.A4,
                Margins = new MarginSettings { Top = 10 }
            };
            var objectSettings = new ObjectSettings
            {
                PagesCount = true,
                WebSettings = { DefaultEncoding = \""utf-8\"" },
                HtmlContent = html
            };

            var pdf = new HtmlToPdfDocument()
            {
                GlobalSettings = globalSettings,
                Objects = { objectSettings }
            };

            byte[] pdfBytes = null; // IocContainer.Convert(pdf);
            return new FileContentResult(pdfBytes, \""application/pdf\"");
        }
}
""",azure-functions +54604833,"Azure VM Scale Set: is it possible to pass bootstrap script/settings without downloading them from URL?

I can see that with the Custom Script Extension it is possible to bootstrap new VMs (in a scale set). To access a script it needs an Azure storage URI and credentials. This approach doesn't work for me because (due to internal policies) it's not allowed to pass storage credentials.

My VMSS has an assigned service identity, and the latter is registered with Key Vault. So it is quite straightforward to get credentials directly on a box. But for this I need at least a small bootstrap script =)

I found one hacky way to achieve this through the Custom Script Extension:

$bootstrapScriptPath = Join-Path -Path $PSScriptRoot -ChildPath \""bootstrap.ps1\""
$bootstrapScriptBlock = Get-Command $bootstrapScriptPath | Select -ExpandProperty ScriptBlock
$installScriptBase64 = [System.Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($bootstrapScriptBlock.ToString()))

\""commandToExecute\"": \""[concat('powershell -ExecutionPolicy Unrestricted -EncodedCommand ', parameters('installScriptBase64'))]\""
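The encoding step above relies on PowerShell's -EncodedCommand accepting a Base64 string over the script's UTF-16LE bytes (which is what [System.Text.Encoding]::Unicode produces). A minimal sketch of the same transformation, with a hypothetical one-line payload:

```python
import base64

# Hypothetical bootstrap payload; -EncodedCommand expects Base64 of UTF-16LE bytes.
script = 'Write-Output "bootstrap"'
encoded = base64.b64encode(script.encode("utf-16-le")).decode("ascii")
command_to_execute = f"powershell -ExecutionPolicy Unrestricted -EncodedCommand {encoded}"
```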

But I wonder whether there are better solutions.

Essentially I need something that Cloud Services provide: the ability to upload a payload and config settings.

SOLUTION

(Note: this is for a Windows VM. For a Linux VM there is an easier way; thanks to @sendmarsh.)

Please see below for actual implementation (note I marked as answer a post from @4c74356b41 who suggested this idea).

""",azure-virtual-machine +56490215,Is there any Rest Api which can Create Deployment Center at Azure portal in Web App?

I am trying to create a resource group, App Service plan, and web app through the Microsoft Azure REST API. I have created the above services via the REST API, but I am unable to find out how to create the Deployment Center configuration.

I am not finding anything.

,azure-web-app-service +54761959,"Deploy WAR to ROOT folder using MSDeploy ARM Template

I am using an ARM template to create an Azure App Service with a Tomcat container and deploy the application as a .war file. Here is my template.json with the MSDeploy extension:

{
   \""apiVersion\"": \""2014-06-01\"",
   \""name\"": \""MSDeploy\"",
   \""type\"": \""Extensions\"",
   \""dependsOn\"": [
      \""[concat('Microsoft.Web/Sites/', parameters('appservice_name'))]\"",
      \""[concat('Microsoft.Web/Sites/', parameters('appservice_name'), '/config/web')]\""
   ],
   \""properties\"": {
      \""packageUri\"": \""<StorageUrl>/Application.war\""
   }
}

It works fine and deploys my .war file to the wwwroot folder, but I need to deploy into the wwwroot/ROOT folder.

Is there any way to specify deployment path in MSDeploy ARM Template?

""",azure-web-app-service +50335655,Azure Web App getting time out of 4 minuets idle

I have hosted an ASP.NET MVC application in an Azure Web App; it is timing out after 4 minutes idle, and the session is cleared for that instance.

I'm currently using the Free shared-infrastructure pricing tier. Would changing the plan extend this idle timeout, or did I miss some configuration to set up the session timeout?

,azure-web-app-service +50480462,Can Azure Backup server be installed on same server as workload?

Is it possible to install Azure Backup Server on the same server as the workload, or does it require a separate server?

And does it support backup of SQL 2014 Express edition?

,azure-virtual-machine +47883799,"Azure Auto scaling Dependency on virtual scale set and its limitation

Why is there no autoscaling for individual virtual machines in Azure? Autoscaling is done through virtual machine scale sets, which support only a handful of operating system images. Are there any limitations of VMSS vs. virtual machines? Is there any way to autoscale a virtual machine other than using this: https://blogs.msdn.microsoft.com/kaevans/2015/02/20/autoscaling-azurevirtual-machines/

I think Azure Monitor can be configured to autoscale a VM, but I could not figure out how to do so.

""",azure-virtual-machine +55735005,"Azure HTTPtrigger function not writing to Azure Storage Queue

I am expecting the below code to take a JSON body from func.HttpRequest, write that message to an Azure Storage queue, and then return a success message to the caller. This works, except that my storage queue is blank.

import logging

import azure.functions as func


def main(req: func.HttpRequest,
         orders: func.Out[func.QueueMessage]) -> func.HttpResponse:

    logging.info('Python HTTP trigger function processed a request.')
    message = req.get_json()
    logging.info(message)
    orders.set(message)
    return func.HttpResponse(
        body="success",
        status_code=200
    )

Function.json

{
  \""scriptFile\"": \""__init__.py\"",
  \""bindings\"": [
    {
      \""authLevel\"": \""anonymous\"",
      \""type\"": \""httpTrigger\"",
      \""direction\"": \""in\"",
      \""name\"": \""req\"",
      \""methods\"": [
        \""get\"",
        \""post\""
      ]
    },
    {
      \""type\"": \""http\"",
      \""direction\"": \""out\"",
      \""name\"": \""$return\""
    },
    {
      \""type\"": \""queue\"",
      \""direction\"": \""out\"",
      \""name\"": \""orders\"",
      \""queueName\"": \""preprocess\"",
      \""connection\"": \""orders_STORAGE\""
    }
  ]
}

Local.settings.json

{   \""IsEncrypted\"": false    \""Values\"": {     \""FUNCTIONS_ER_RUNTIME\"": \""python\""      \""AzureWebJobsStorage\"": \""AzureWebJobsStorage\""      \""orders_STORAGE\"": \""DefaultEndpointsProtocol=https;AccountName=orders;AccountKey=*****;EndpointSuffix=core.windows.net\""   } } 

Terminal output:

… [4/17/2019 5:54:39 PM] Executing 'Functions.QueueTrigger' (Reason='New queue message detected on 'preprocess'.' Id=f27fd7d1-1ace-****-****-00fb021c9ca4)

[4/17/2019 5:54:39 PM] Trigger Details: MessageId: d28f96c5-****-****-9191-93f96a4423de DequeueCount: 1 InsertionTime: 4/17/2019 5:54:35 PM +00:00

[4/17/2019 5:54:39 PM] INFO: Received FunctionInvocationRequest request ID: 5bf59a45-****-****-9705-173d9635ca94 function ID: fa626dc9-****-****-a59b-6a48f08d87e1 invocation ID: f27fd7d1-1ace-****-****-00fb021c9ca4

[4/17/2019 5:54:39 PM] Python queue trigger function processed a queue item: name2

[4/17/2019 5:54:39 PM] INFO: Successfully processed FunctionInvocationRequest request ID: 5bf59a45-****-****-9705-173d9635ca94 function ID: fa626dc9-3313-****-****6a48f08d87e1 invocation ID: f27fd7d1-1ace-****-****-00fb021c9ca4

[4/17/2019 5:54:39 PM] Executed 'Functions.QueueTrigger' (Succeeded Id=f27fd7d1-1ace-****-****-00fb021c9ca4)

INFO: Successfully processed

– makes me think this worked and I should see a message in my queue but it is blank.

Why am I not seeing the message in the queue?
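One thing worth checking (an assumption on my part, not confirmed by the post): the queue output binding's set() accepts a string or bytes, while req.get_json() returns a dict, so serializing the body first avoids handing the binding an unsupported type. A minimal sketch with a hypothetical request body:

```python
import json

# Hypothetical request body, standing in for req.get_json()
message = {"order": 1, "item": "widget"}

# Serialize before setting the queue output; set() expects str/bytes.
payload = json.dumps(message)
# orders.set(payload)  # in the actual function
```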

Thanks

""",azure-functions +21550850,"Azure CreateVM Deployment No target URI is specified for the image

I am trying to create a VM using the Azure Service Management REST API. It works fine if I choose my own images, which I have created against my account. When I choose other images that are publicly available, I get the following error:

No target URI is specified for the image fb83b3509582419d99629ce476bcb5c8__SQL-Server-2014CTP2-CU1-12.0.1736.0-Evaluation-ENU-WS2012R2-CY13SU12. 

The following is the XML I use to create the VM:

<Deployment xmlns=\""http://schemas.microsoft.com/windowsazure\"" xmlns:i=\""http://www.w3.org/2001/XMLSchema-instance\"">
    <Name>azure4569033333</Name>
    <DeploymentSlot>Production</DeploymentSlot>
    <Label>YXp1cmU0NTY5MDMzMzMz</Label>
    <RoleList>
        <Role>
            <RoleName>azure4569033333</RoleName>
            <RoleType>PersistentVMRole</RoleType>
            <ConfigurationSets>
                <ConfigurationSet i:type=\""WindowsProvisioningConfigurationSet\"">
                    <ConfigurationSetType>WindowsProvisioningConfiguration</ConfigurationSetType>
                    <ComputerName>azure4569033333</ComputerName>
                    <AdminPassword>Pass!admin123</AdminPassword>
                    <AdminUsername>admin12</AdminUsername>
                </ConfigurationSet>
                <ConfigurationSet i:type=\""NetworkConfigurationSet\"">
                    <ConfigurationSetType>NetworkConfiguration</ConfigurationSetType>
                    <InputEndpoints>
                        <InputEndpoint>
                            <LocalPort>3389</LocalPort>
                            <Name>Remote Desktop</Name>
                            <Port>3389</Port>
                            <Protocol>tcp</Protocol>
                        </InputEndpoint>
                    </InputEndpoints>
                </ConfigurationSet>
            </ConfigurationSets>
            <OSVirtualHardDisk>
                <SourceImageName>fb83b3509582419d99629ce476bcb5c8__SQL-Server-2014CTP2-CU1-12.0.1736.0-Evaluation-ENU-WS2012R2-CY13SU12</SourceImageName>
                <MediaLink>https://portalvhdsd4lc8tzn260zd.blob.core.windows.net/vhds/fb83b3509582419d99629ce476bcb5c8__SQL-Server-2014CTP2-CU1-12.0.1736.0-Evaluation-ENU-WS2012R2-CY13SU12.vhd</MediaLink>
            </OSVirtualHardDisk>
            <RoleSize>ExtraSmall</RoleSize>
        </Role>
    </RoleList>
</Deployment>
""",azure-virtual-machine +55261361,"Actionresult not returning correct error code

I am implementing a custom business exception and returning the exception using BadRequestObjectResult.

exception throw

throw new BusinessException(Int32.Parse(AppConstants.ErrorCodes.DeviceNotFound), \""Device not found.\"");

catch block

    catch (BusinessException ex)
    {
        return (ActionResult)new BadRequestObjectResult(ex);
    }

But it always returns an internal server error (code 500).

The value in the object at the catch block shows the correct error code which is passed from the throw part.

\""Values

""",azure-functions +57624103,"When user logs into website using ASP.NET Core Identity he is logged-in in all browsers accessing the website

I have an ASP.NET Core 2.2 application running in a B1 instance in Azure App Services. If I log into the website and open it on another machine I am logged in on that machine too including access to all pages protected by Authorization. When I log out on the second machine I'm not automatically logged back in until I clear the browser cache and restart the browser.

A similar issue was described here but was never really answered: ASP.NET Core identity shared across browser

This behavior seems to be somehow related to be running in an Azure App Service (Linux). I had the site running in a Docker image on a normal Linux VM (Ubuntu 18.04 official MS Docker image) before and did not encounter this problem.

Here is all code from Startup.cs that could be relevant:

public void ConfigureServices(IServiceCollection services)
{
    [...]

    services.Configure<CookiePolicyOptions>(
        options =>
        {
            // This lambda determines whether user consent for non-essential cookies is needed for a given request.
            options.CheckConsentNeeded = _ => false;
            options.MinimumSameSitePolicy = SameSiteMode.None;
        });

    [...]

    services.AddIdentity<User, IdentityRole>()
        .AddErrorDescriber<TopikonIdentityErrorDescriber>()
        .AddEntityFrameworkStores<TopikonContext>()
        .AddDefaultTokenProviders();
    services.AddAuthentication();
    services.AddAuthorization(
        options =>
        {
            options.AddPolicy(
                TopikonPolicies.ControlPanel,
                policy => policy
                    .RequireRole(TopikonRoles.ControlCenterAccess));
        });

    services
        .AddMvc()
        .AddRazorPagesOptions(
            options =>
            {
                options.AllowAreas = true;
                options.Conventions.AuthorizeFolder(\""/\"");
            }).SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

    [...]
}

public static void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    [...]
    app.UseCookiePolicy();
    app.UseAuthentication();
    app.UseSession();
    app.UseMvc();
}

App Service Authentication is switched on and set to \""Allow Anonymous\"". I tried switching it off but the result was the same. I'd like users to be logged in only on the machine they are using and not to provide their login to everyone visiting the site. Unfortunately I'm not quite sure where to look for answers.

""",azure-web-app-service +54770199,azure webapp node default start script

I want an npm script to run after deployment in my Azure web app.

I assumed that npm run start would automatically be executed. However, this does not seem to be the case. I already tried leaving the start script empty and the server still works. I also tried running some random Node file in the start script and it does not get executed. So I assume that Azure does not run npm run start by default but rather executes node index.js.

How can I run npm scripts instead?

,azure-web-app-service +36112904,"Edit Work Item Templates in Visual Studio Teamservices?

Is it possible to edit the work item templates (formerly TFS Online) in Visual Studio Team Services?

I haven't found anything yet during my research. I tried the Power Tools; unfortunately it says \""access denied\"".

""",azure-devops +45428821,"setup azure machine with docker for play-framework application

I am new to Azure and trying to set up my dockerized Play Framework 2.5.x application there, following this post:

Azure Linux Docker Machine setup

However it always failed on the first command:

docker-machine create -d azure \\
    --azure-subscription-id $sub \\
    --azure-ssh-user azureuser \\
    --azure-open-port 80 \\
    myvm

error:

Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Error creating machine: Error running provisioning: ssh command error:
command : sudo systemctl -f start docker
err     : exit status 1
output  : Job for docker.service failed because the control process exited with error code. See \""systemctl status docker.service\"" and \""journalctl -xe\"" for details.

There are a lot of posts on Microsoft's website for Azure setup, but everything is in pieces. I have followed through numerous of them and they just end up as new entries in my dashboard, which adds to my bill, but I still cannot access any of them to deploy my app. This is really frustrating, as I have spent days and have almost run out of my free $200 credits.

Any help, or any link to a WORKING setup, is very highly appreciated.

""",azure-virtual-machine +35447513,How can I run 5 azure micro-service instance in same time

I have an Azure microservice called Service A. This microservice is always watching a queue called Queue A. I want to run 5 instances of Service A at the same time. Is that possible?

If it is possible, what happens when I insert 5 items into the queue at the same time?

,azure-web-app-service +48514044,"Azure Function Authorization

I'm converting a Web API to an Azure Functions app. It authenticates using a SecurityToken in the header. With the Web API I used an attribute to invoke the authentication logic, but this doesn't work in Azure Functions.

[ApiAuthentication()]
[FunctionName(\""GetConfig\"")]
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, \""get\"")]HttpRequestMessage req, TraceWriter log)
{

}

Is there a way to make this work or is there a better way?

""",azure-functions +57450594,VS2019 - Durable function (v1.8.3) and non-durable Function in a single function app - is supported?

I have a VS 2019 solution with two functions. One Durable Function with an Orchestration Client triggered by a BLOB trigger and its related activities. And another non-durable function triggered by a service bus event trigger.

The non-durable function triggered by a service bus event creates a blob that triggers the durable function.

I setup a taskhub for the Durable Function. Deployed both the functions onto Azure on cloud.

For some reason the functions work fine and get triggered the first time, and the orchestrator does its job. The second time around, the orchestration client gets triggered, but the orchestrator function does not.

Looking for the durable function related tables in the storage explorer I do not see them - they don't seem to be getting created.

Using 1.8.3 version of Durable Task extensions.

Was wondering if I can have both durable and non-durable functions within the same function app? Was about to try moving them into separate function apps and VS 2017 solutions.

Anyone aware of any such limitations of Durable functions co-existing with non-durable functions within the same function app?

Regards -Athadu

,azure-functions +44305888,"Ajax call 500 server error for Django - Azure

I have an azure deployed django web app in which it shows instagram embedded pages one by one when I click next.

Issue:-

When I switch Azure Active Directory authentication ON, it loads the first Instagram embedded link, but clicking the next button, which should load the next embedded page, shows an HTTP 500 error message in the console and doesn't load the page.

And if I switch off AD authentication and set access to anonymous, the pages load without any issue on the next click.

What is the issue, and how can I load them with my authentication settings on?

EDIT

I had previously concluded that the Instagram pages were the problem behind the pages not loading, but it seems that's not the issue.

The problem is with the URL I am calling when clicking the next button. With authentication the app doesn't proceed to the next page, but without any authentication it successfully goes to the next page.

Refer the code below where I am making the call:-

$("#button-next").click(function(e) {
    var fin = '';
    $.ajax({
        url: 'mywebsitename.azurewebsites.net/next_details/',
        method: 'GET',
        data: {'link': window.location.href},
        success: function(response) {
            if (response.data != 'no links') {
                $.each(response['data'], function(index, value) {
                    fin += "//pass";
                });
                $('append-iframes').html(fin);
                window.location.href = '*' + response.name + '/?type=' + response.check_type;
            }
        }
    });
});

EDIT - 2

//Next_details function

@csrf_exempt
def next_details(request):
    import pdb; pdb.set_trace()
    check_type = ''.join(findall('type=(.*)', request.POST.get('link', '')))
    data_list = []
    if check_type:
        present_link = request.POST.get('link', '').split('/')[-2]
        if check_type == 'all':
            data = urlopen('https://myappname.azurewebsites.net/get_detailed/').read()
        else:
            data = urlopen('https://myappname.azurewebsites.net/get_detailed_%s/' % check_type).read()
    else:
        check_type = 'all'
        present_link = request.POST.get('link', '').split('/')[-1]
        data = urlopen('https://myappname.azurewebsites.net/get_detailed/').read()
    data_json = json.loads(data)
    qual_data = [dt['id'] for dt in data_json['qual_data'] if dt['name'] == present_link]
    if qual_data:
        present_id = qual_data[0]
        next_acc = [dt['name'] for dt in data_json['qual_data'] if dt['id'] == present_id + 1]
        acc_name = next_acc[0]
        links = Radarly.objects.filter(screen_name=acc_name, status='Active')
        if links:
            for link in links:
                data_list.append({'perma_link': link.permalink, 'name': acc_name})
            data = json.dumps({'data': data_list, 'check_type': check_type})
            return HttpResponse(data, content_type="application/json")
        else:
            new_data = json.dumps({'data': 'no links'})
            return HttpResponse(new_data, content_type="application/json")

Also, below is a screenshot of the error shown in the DOM. (screenshot)

""",azure-web-app-service +34797792,"InvalidApiVersionParameter-When I am trying to create VM on Azure

When I am trying to create VM on Azure by Azure Java SDK I got the error message shown below.

InvalidApiVersionParameter: The api-version '2015-06-15' is invalid. The supported versions are '2015-11-01, 2015-01-01, 2014-04-01-preview, 2014-04-01, 2014-01-01, 2013-03-01, 2014-02-26, 2014-04'.

It seems my api-version is incorrect. However I didn't set this api-version in the code. How to fix this error? Thanks.

""",azure-virtual-machine +54442936,Why do we get HttpAntiForgeryException after publishing update to Azure Web app service?

We published an application update to the same Azure web app service and started getting errors:

Exception: System.Web.Mvc.HttpAntiForgeryException (0x80004005): The anti-forgery token could not be decrypted. If this application is hosted by a Web Farm or cluster ensure that all machines are running the same version of ASP.NET Web Pages and that the configuration specifies explicit encryption and validation keys. AutoGenerate cannot be used in a cluster.

This happens to clients using a login page within the app. These are the response headers:

HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 5585
Content-Type: text/html; charset=utf-8
Vary: Accept-Encoding
Server: Microsoft-IIS/10.0
X-AspNetMvc-Version: 5.2
X-Frame-Options: SAMEORIGIN
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Wed, 30 Jan 2019 14:23:33 GMT

The client has to either close the browser and reopen or clear the browser's cookies to fix the problem.

The web app was running 3 app service instances before and after the upgrade. ARR Affinity is on.

Why is this happening and how do I fix it?

,azure-web-app-service +41665397,"Cannot access my scm url

I'm working through the App Service getting-started tutorial with Azure CLI 2.0 Preview, but at the stage where we do a `git push` the host is unresolved. Indeed, when I ping it I get no response, even if I add port 443 - which I'm not sure Windows ping accepts.

When I log into the portal I see everything appears to be set up. At least the git clone URL is identical to what I have added as a git remote. Namely

https://my@email.com.com@my-cli-app.scm.azurewebsites.net:443/my-cli-app.git

I use the UK West centre which is the closest to me. Anyway I can't imagine there are DNS propagation delays to worry about

Have I missed something obvious?

thanks

PS: one thing that I don't believe can be related is that I have a number of subscriptions, but as they are not specified in the given commands it picked a random one. It happens to be an MSDN subscription which has expired but still works with Azure.

""",azure-web-app-service +47033544,"Azure KeyVault with Key Rotation

Our application hasn't used Key Vault until now. We are thinking of using Azure Key Vault to enforce security for keys, secrets, and certificates. I read the Microsoft documentation on this (Link). It's not clear whether Azure Key Vault works with identity providers other than Azure AD, because we are not using Azure AD, but we are using Azure App Service and a storage account. We also want to implement key rotation with a 1-hour expiry.

My questions are

  1. Should the web app be registered with Azure AD to use KeyVault ?

  2. While creating an Azure Key Vault I didn't see any option about key rotation. Am I looking in the wrong place?

  3. Any sample code would be helpful.

""",azure-storage +23771981,TFS Renaming and deleting files is slow within Visual Studio 2013

When, within Visual Studio 2013, I rename a file that is bound to TFS, Visual Studio pauses for around six seconds. When I'm refactoring, for example, this wait is really annoying because it interrupts my flow.

I suspect that when I rename a file it is contacting TFS and doing the rename on the server, which is the reason for the pause and my wait (edit - I don't think this is the case, because it takes exactly 6 seconds even when I don't have internet connectivity). If this is the reason, is there any way to tell VS not to contact TFS until I check in? If it is not the reason for slowing down VS while I rename, does anyone have any solutions to speed this process up?

Edit - further information Visual Studio 2013 with update 2 and the free online version of TFS. The pause occurs with or without internet access. My machine is fairly fast (i5-2520M processor) with a SSD but it is 32 bit with 3gb of ram. I don't have many problems with memory though due to the SSD. In terms of add-ins I haven't installed any other than the default (I only recently upgraded to VS 2013)

,azure-devops +35776501,"VSTS - Backlog Priority altered by sorting in columns

I am attached to a project using VSTS with the Scrum process selected. As the Product Owner sorts the backlog things work fine with respect to the Backlog Priority value under the hood.

As tasks move along the Board from column to column I have noticed that the ordering of cards within a column will impact the Backlog Priority. This seems contrary to good sense.

Is there a justification why a developer's move of a card within a column such as "In QA" would result in that item being ranked above the other cards in the backlog?

I think it would be better if the sorting/ranking only worked on the Backlog itself. Once an item is underway in the columns moving it up and down as developers tend to do should not disrupt its position in the backlog.

""",azure-devops +38055705,"How to include Gulp generated dist folder in ProjectName.zip package created by the Visual Studio Build step

I have a build definition with the following build steps in the following order:

  1. NuGet Installer
  2. Npm
  3. Gulp
  4. Visual Studio Build
  5. Azure Web App Deployment

In step 3 Gulp generates a dist folder in my application root. The dist folder contains some subfolders and files. The subfolders themselves can contain other subfolders and files.

In step 4 the Visual Studio Build step creates a <ProjectName>.zip zip file when completed.

The Azure Web App Deployment step then deploys that zip file to my Azure Web App.

How do I include the dist folder in <ProjectName>.zip?

I tried doing two Azure Web App Deployments, deploying one zip file at a time in the same build definition, but the second Azure Web App Deployment build step wipes whatever was deployed by the first one. Is there a way to tell the second Azure Web App Deployment step to "append" to whatever was deployed by the first Azure Web App Deployment step?
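
One workaround, if the build step cannot be convinced to pick up the folder, is a small post-build script that appends the dist output into the package zip before the deployment step. A rough sketch of the idea - the paths and the `dist` prefix are illustrative, not the actual VSTS layout:

```python
import os
import tempfile
import zipfile
from pathlib import Path

# Hypothetical post-build step: append a folder's contents into an
# existing package zip under a "dist" prefix. All paths are illustrative.
def add_folder_to_zip(zip_path, folder, arc_prefix="dist"):
    with zipfile.ZipFile(zip_path, "a") as zf:
        for root, _, files in os.walk(folder):
            for name in files:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, folder).replace(os.sep, "/")
                zf.write(full, f"{arc_prefix}/{rel}")

# Demo with throwaway files standing in for the Gulp output and the package:
tmp = tempfile.mkdtemp()
dist = Path(tmp, "dist")
(dist / "sub").mkdir(parents=True)
(dist / "bundle.js").write_text("// js")
(dist / "sub" / "a.css").write_text("/* css */")

pkg = Path(tmp, "Project.zip")
with zipfile.ZipFile(pkg, "w") as zf:
    zf.writestr("web.config", "<configuration/>")

add_folder_to_zip(pkg, dist)
with zipfile.ZipFile(pkg) as zf:
    print(sorted(zf.namelist()))  # dist files now live alongside web.config
```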

""",azure-devops +57675488,Deploying an Azure Function from VS Code - Succesfull but not visible in the Portal

I created a function and I am trying to deploy it from VS Code by clicking Deploy to Function App.... The deployment runs successfully based on the output log (Deployment successful), but when I go to the portal the function is not listed under Functions.

What shall I do and what is the problem here?

When I debug in VS Code I get this: No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage ServiceBus Timers etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage() builder.AddServiceBus() builder.AddTimers() etc.).

,azure-functions +38571852,Synology NAS Backup to Azure Blob Storage - used space and billing

I am using Azure Blob Storage to backup Synology NAS.

My used space on NAS is 810 GB but my used space in Azure Blob Storage is 2642GB. I am using STANDARD IO - BLOCK BLOB (GB) - LOCALLY REDUNDANT.

Does someone know why there is such a big difference in stored data?

Thanks A

,azure-storage +53480232,"How to configure AllowedQueryOptions on an Azure Mobile App?

I am attempting to get the total item count from an Azure Mobile App. The mobile app is a .Net implementation that is using Table Storage. It is based on the Mobile App QuickStart code and it's only got the minimum of changes needed to make it work with Table Storage. All NuGet packages have been updated to the latest versions.

Here's the client code (Xamarin-Android):

client = new MobileServiceClient(applicationURL, new LoggingHandler(true));
todoTable = client.GetTable<ToDoItem>();
List<ToDoItem> list = await todoTable.Take(0).IncludeTotalCount().ToListAsync();

This gives me the response

The query specified in the URI is not valid: 'Query option 'InlineCount' is not allowed. To allow it set the 'AllowedQueryOptions' property on EnableQueryAttribute or QueryValidationSettings.'.

What I've tried

[EnableQuery(AllowedQueryOptions = AllowedQueryOptions.All)]
public Task<IEnumerable<TodoItem>> GetAllTodoItems(ODataQueryOptions options)
{
    return DomainManager.QueryAsync(options);
}

This still gives me the same error back. I have tried different combinations of AllowedQueryOptions as well such as [EnableQuery(AllowedQueryOptions= AllowedQueryOptions.Filter | AllowedQueryOptions.Top | AllowedQueryOptions.Select | AllowedQueryOptions.InlineCount)] but the result is always the same.

Then I tried to add the option in ConfigureMobileApp()

config.Filters.Add(new EnableQueryAttribute()
{
    AllowedQueryOptions = AllowedQueryOptions.All
});

The code compiles and publishing appears to succeed, but the web site shown after publishing displayed:

<?xml version="1.0" encoding="ISO-8859-1"?>
<Error>
    <Message>An error has occurred.</Message>
</Error>

The client is still returning the same error message. I've also tried to combine this with [EnableQuery] but still no luck.

The DomainManager is initialized using the code

DomainManager = new StorageDomainManager<TodoItem>(connectionStringName, tableName, Request);

I tried to modify this to use the constructor that takes ValidationSettings and ODataQuerySettings objects.

DomainManager = new StorageDomainManager<TodoItem>(connectionStringName, tableName, Request, GetValidationSettings(), GetQuerySettings());

My implementation of GetValidationSettings is identical to GetDefaultValidationSettings on https://github.com/Azure/azure-mobile-apps-net-server/blob/master/src/Microsoft.Azure.Mobile.Server.Storage/StorageDomainManager.cs

except I added inlinecount to the AllowedQueryOptions

AllowedQueryOptions = AllowedQueryOptions.Filter
    | AllowedQueryOptions.Top
    | AllowedQueryOptions.Select
    | AllowedQueryOptions.InlineCount;

I didn't get the same error back anymore as this made QueryAsync fail with internal server error instead.

Anyone know how to configure AllowedQueryOptions so that I can use IncludeTotalCount() from the client?

""",azure-storage +8889823,What is the best way to monitor a container in Azure blob storage for changes?

I am looking for the best way to monitor a container/folder in Azure blob storage for changes. So far I have only found one way to do this which is to run a worker process somewhere that pings the container's contents on a regular basis to look for changes.

Is there a better way?
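
Absent a push mechanism, the polling approach boils down to comparing successive listings of the container, e.g. blob name mapped to ETag. A minimal sketch of that diff - plain dicts stand in for the listing a storage client would return:

```python
# Hedged sketch: detect changes between two listings of a container by
# comparing blob-name -> ETag snapshots. With a real client the snapshots
# would come from listing the container; here plain dicts stand in.
def diff_listing(previous, current):
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    modified = sorted(
        name for name in set(previous) & set(current)
        if previous[name] != current[name]
    )
    return added, removed, modified

before = {"a.txt": "etag1", "b.txt": "etag2"}
after = {"b.txt": "etag9", "c.txt": "etag3"}
print(diff_listing(before, after))  # (['c.txt'], ['a.txt'], ['b.txt'])
```

The worker process would keep the previous snapshot in memory (or in a table) and run this comparison on each polling pass.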

,azure-storage +12250634,installing oracle in windows azure under VM

I am thinking of using windows azure for my next project. I am forced to use oracle as the db back end. I would like to know whether there is any possibility of installing oracle on a windows azure virtual machine?

,azure-virtual-machine +54446426,"Azure website deployment to a machine that already has a "Default Web Site"

I am trying to deploy a website via Azure DevOps to an IIS server that has the preconfigured "Default Web Site" started with a binding on port 80.

I want my website to run on port 80.

I am using the "IIS Web App Manage" task. When I run my deployment on this machine I get an error:

[error]Binding (http / * : 80 : ) already exists for a different website ("site "default web site" (id:1, bindings:http/*:80:, state:stopped)"), change the port and retry the operation.

I have stopped the default web site but I still get the same error because the binding already exists.

I have tried using the IIS Web App Manage task to remove the binding on the Default web site but there does not appear to be a way to do this. I do not see another task that will perform this task.

I am trying to automate this for future deployment via Azure Devops so I do not have to change the bindings or remove the default website by hand.

""",azure-devops +56390151,"how to call from an azure web app a process running as a continuous azure webjob

I want to run the OPA (https://www.openpolicyagent.org/) agent as a background process besides my webapp with my webapp calling the OPA agent through http://localhost:8181.

I can start the OPA agent (with opa.exe run -s) which exposes an API on localhost:8181 through a continuous webjob in the azure web app. However my web app cannot connect to the OPA agent.

Is it possible for a web app process to do RPC (REST API) with a webjob process ?

""",azure-web-app-service +53100139,Point root domain to Azure Function

In our project requirements we need to deal with a number of redirects where requests come in the form of example.com/some-uri and based on this some-uri get redirected to various places.

As all of our existing apps are already hosted in Azure (hundreds of services, databases, applications, etc.), Azure Functions seem a very reasonable choice. The workload is extremely simple: match the incoming URI against a table and issue a 301 redirect to the corresponding target.
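
That lookup-and-redirect core is tiny; as a sketch (the names and table entries are illustrative, and this is not an Azure Functions signature):

```python
# Hypothetical redirect table and handler: map an incoming path to a
# (status, target) pair. Entries are made-up examples.
REDIRECTS = {
    "/some-uri": "https://target.example.com/landing",
    "/docs": "https://docs.example.com/",
}

def handle(path):
    """Return (status_code, location) for an incoming request path."""
    target = REDIRECTS.get(path)
    if target is None:
        return 404, None
    return 301, target

print(handle("/docs"))     # (301, 'https://docs.example.com/')
print(handle("/missing"))  # (404, None)
```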

Unfortunately with Azure Function there is no public IP address and I cannot use CNAME on the root domain (that is I cannot use DNS syntax @ CNAME somefunction.something.azure.net)

I don't want to have to pay for an app service to just deal with these redirects. The number of requests I expect is well within the free allocation of function invocations therefore I would be getting this essentially for free.

How can I point the root of my example.com domain to the function?

,azure-functions +55848047,"Authentication with Azure Active Directory on App Service for multi-container app (docker-compose)

I am trying to enable in-build authentication for app service with Azure Active Directory.

It works fine when I use a single-container configuration, but when I try to configure docker-compose the redirection to the Microsoft login page is not in place.

I am using nginx (reverse-proxy image) for serving the static websites (two images) and one api image (server image).

App Service built-in authentication is enabled, Active Directory is configured, and the action to take when a request is not authenticated is set to Log in with Azure Active Directory. (screenshot of auth setup)

docker-compose.yml

version: '3'

services:
  api:
    image: plugotestcontainerregistry.azurecr.io/server:latest
    environment:
      PORT: 4000
    expose:
      - "4000"
    restart: always
  reverse-proxy:
    image: plugotestcontainerregistry.azurecr.io/reverse-proxy:latest
    ports:
      - "8080:8080"
    restart: always

nginx config

server {
  listen 80;
  root   /usr/share/nginx/root;
  location / {
    index  index.html index.htm;
    try_files $uri $uri/ /index.html;
  }
  location /api {
    proxy_pass  "http://api:4000";
    proxy_set_header  Host $http_host;
    proxy_set_header  X-Forwarded-For $remote_addr;
  }
  location /admin {
    alias   /usr/share/nginx/admin/;
    index  index.html index.htm;
    try_files $uri $uri/ /admin/index.html;
  }
  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   /usr/share/nginx/html;
  }
}

Working authentication using AAD and docker compose in Web App for containers.

""",azure-web-app-service +35853840,AzureWebApp: Monitor Application Logs

I'm building an Azure web-app and if there are certain unexpected errors I want to be able to bubble it up in the Azure Dashboard / add alerts.

Any System.Diagnostics.Trace.TraceError() messages are logged to the ApplicationLog. Is there a way to add alert/monitoring-graphs for these in Azure Portal?

,azure-web-app-service +38370304,"NetStandard Library not getting dependencies on VSTS Build Agent

I currently have a Xamarin.Android project that references a .NET Standard 1.1 library that references AutoMapper 5.0.2.

When I try to build this through VSTS I get this error

C:\Program Files (x86)\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1316, 2): Error: Exception while loading assemblies: System.IO.FileNotFoundException: Could not load assembly 'System.Collections.Specialized, Version=4.0.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. Perhaps it doesn't exist in the Mono for Android profile?

This solution builds perfectly fine on my local machine and runs in the Android Emulator.

Things I have tried (and none have worked)

  1. Installing the AutoMapper Nuget package directly against the Android Project.
  2. Installing System.Collections.Specialized in the Android project.
  3. Doing <CopyNuGetImplementations>true</CopyNuGetImplementations> in the Android Project.

Also just as a side note I have .NET Standard 1.1 Libraries all the way through my project yet I can see from the build log that its using .NET Standard 1.3. Not sure if this will make a difference as I am not sure how the build process manages these standards.

Copying file from "C:\Users\buildguest\.nuget\packages\AutoMapper\5.0.2\lib\netstandard1.3\AutoMapper.dll" to "C:\a\1\b/Release\AutoMapper.dll".

Update 1

Just to add that I have tried using Nuget 3.4.4 and Nuget 3.5.0-beta2 in the build agent and while this solved other issues I was having it didn't resolve the current one I am experiencing.

Update 2

Here is my Android project.json

{
  "dependencies": {
    "Newtonsoft.Json": "9.0.1"
  },
  "frameworks": {
    "MonoAndroid, Version=v6.0": {}
  },
  "runtimes": {
    "win": {}
  }
}

Here is my Portable project.json

{
  "supports": {},
  "dependencies": {
    "AutoMapper": "5.0.2",
    "NETStandard.Library": "1.6.0",
    "Xamarin.Forms": "2.3.0.107"
  },
  "frameworks": {
    "netstandard1.1": {
      "imports": "portable-win+net45+wp8+win81+wpa8"
    }
  }
}

Update 3: 18th July Just adding more test cases

  1. Did a brand new Xamarin Android project with packages.config existing nuget.exe. All works.
  2. Add AutoMapper reference builds and runs locally. Fails in VSTS Build Agent
  3. Updated Nuget.exe - still fails on build
  4. Update to project.json - still fails on build.

I can not get even a blank project with an AutoMapper 5.0.2 reference working in the Visual Studio Build step of VSTS. Always the same error as above.

""",azure-devops +51071947,Azure VM with VPN

This is more one for curiosity and learning.

I currently have an Azure VM (Windows 2016 and SQL 2017) which I just used for R&D. The RDP port is enabled - no big deal as there is nothing top secret there.

But just to learn more about Azure I wanted to create a VPN so I can connect via that. Googling has left me a tad confused as to how to go about this: gateways, gateway subnets, etc. I'm not sure if the articles I am reading are the right ones, as whatever I try doesn't appear to work.

Does anyone know of any links that might help me start from scratch with VPN settings to connect?

Thanks in advance Ray

,azure-virtual-machine +5483301,Reduce File Size Upload for Azure

I have a hosted service on Azure, and every time I want to upload the package (cspkg & cscfg files) it takes very long. My cspkg file is 18 MB. Is there a better way to do the upload? My thought is to put the images, styles, etc. in Azure Storage and point my web app image references (src) to the storage account, so I don't have to include those files in the cspkg, or something like that. It is just too much if I only have to modify a single line of code but have to wait almost 30 minutes for the upload to complete.

thx

,azure-storage +42003440,The IP address of my Azure Windows VM changed without warning

A few days ago the IP address of our Windows Server VM changed from 40.x.x.x to 13.x.x.x on the Azure platform. We have many loggers in the field that connect to this IP address, and now none of them can connect.

Can the IP change without any warning from Azure?

Also, there is no support to be found: no number, no online support... I mean, this is not a problem I should be paying support for... besides, support is more expensive than the VM.

,azure-virtual-machine +47410951,"Cannot deploy REST API with Python on Azure

I'm trying to deploy a simple Azure web application. I created it exactly as described here:

https://docs.microsoft.com/en-us/azure/app-service/app-service-web-get-started-python

but replaced the code in main.py with the following (and updated requirements.txt, of course):

from flask import Flask, request
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)
todos = {}


class HelloWorld(Resource):
    def get(self):
        return {'hello': 'world'}


class TodoSimple(Resource):
    def get(self, todo_id):
        return {todo_id: todos[todo_id]}

    def put(self, todo_id):
        todos[todo_id] = request.form['data']
        return {todo_id: todos[todo_id]}


api.add_resource(HelloWorld, '/')
api.add_resource(TodoSimple, '/<string:todo_id>')


if __name__ == '__main__':
    app.run(debug=True)

Everything works fine locally but there are issues with deployed version:

-- http://my-app-name-here.azurewebsites.net is just fine and prints {'hello': 'world'} as expected

-- other commands provided by TodoSimple are not accessible.

For example following query

curl http://my-app-name-here.azurewebsites.net/todo -d "data=Remember the milk" -X PUT

would result in a "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable." response.

Update: it's all fine when ran locally

$ curl http://localhost:5000/todo -d "data=Remember the milk" -X PUT
{
    "todo": "Remember the milk"
}

Does anyone know what I'm missing with this app deployment?

Update2: approach without flask_restful won't work either:

from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello, World!'


@app.route('/data')
def get_data():
    return 'The data.'


if __name__ == '__main__':
    app.run()

Calling http://my-app-name-here.azurewebsites.net/data results in the "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable." message again.

""",azure-web-app-service +11119614,Execute Methode in windows azure with multiple instances

I want to execute job-scheduler methods in my Windows Azure application. I am using 2 instances of the same application, so if I create a scheduler, both instances can execute the same code. Is it possible to avoid such duplicate execution? Or is it possible to check the other instance before executing the code? For the implementation I am using C#.NET.
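
One common way to get "only one instance runs the job" semantics is a lease or lock that each instance tries to acquire before executing; in Azure this is often done with a blob lease. A minimal local sketch of the idea, with an exclusive lock file standing in for the blob lease (illustrative only, not Azure code):

```python
import os
import tempfile

# Each "instance" tries to take an exclusive lease before running the
# scheduled job; only the winner executes it. O_EXCL makes the file
# creation atomic, so exactly one caller can succeed.
def try_acquire_lease(path):
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True   # this instance won the lease and runs the job
    except FileExistsError:
        return False  # another instance already holds the lease

lease = os.path.join(tempfile.gettempdir(), "scheduler-demo.lease")
if os.path.exists(lease):
    os.remove(lease)  # clean slate for the demo

ran = [try_acquire_lease(lease) for _ in range(2)]  # two "instances" race
print(ran)  # [True, False]: only the first caller runs the job
os.remove(lease)
```

With a blob lease the losing instance simply skips the run (or retries later), and the lease expiry handles the case where the winning instance crashes mid-job.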

,azure-storage +26739170,Azure: Cannot Configure a VM Size beyond A0-A4

I am attempting to increase the size of a Virtual Machine in my Azure subscription from an A2 (2 cores, 3.5 GB) machine to a D3 (4 cores, 14 GB) machine. The only options available for this particular VM on the Configure tab > Virtual machine size are:

  • A0
  • A1
  • A2
  • A3
  • A4

I do not see an A5 or a D3 virtual machine size available - although these are available for other virtual machines within my subscription. We have had this and a couple of other VMs with the same issue running for about a year and a half - the newer VMs in our subscription (as well as machines in the create gallery) can all be scaled into the memory and CPU intensive versions (A5 or D3 D4).

Is there any pathway that will allow me to upgrade this older VM to a newer specification of Virtual Machine?

,azure-virtual-machine +38469845,How do I authenticate Azure Powershell on Azure VM

I want to execute a PowerShell script from an Azure VM to get its current public IP address (and write this address to an environment variable for an application to use).

My question is: what is the best way to authenticate the Azure PowerShell environment? On AWS, credentials get 'baked' into an instance when it is created. Does the equivalent happen with Azure Virtual Machines?
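
As a side note on the "current public IP" part: on an Azure VM the Instance Metadata Service (IMDS) at http://169.254.169.254/metadata/instance can be queried locally with the header Metadata: true and no credentials at all. A sketch of parsing such a response - the exact JSON shape below is an assumption for illustration, the parsing logic is the point:

```python
import json

# Walk an IMDS-style response and pull out the first public IPv4 address.
# The sample document below is a hand-written approximation of what the
# instance/network endpoint returns, not captured real output.
def extract_public_ip(metadata):
    for nic in metadata["network"]["interface"]:
        for addr in nic["ipv4"]["ipAddress"]:
            ip = addr.get("publicIpAddress")
            if ip:
                return ip
    return None

sample = json.loads("""
{"network": {"interface": [
  {"ipv4": {"ipAddress": [
    {"privateIpAddress": "10.0.0.4", "publicIpAddress": "40.1.2.3"}
  ]}}
]}}
""")
print(extract_public_ip(sample))  # 40.1.2.3
```

From PowerShell the equivalent fetch would be an Invoke-RestMethod against the same URL with the Metadata header set.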

,azure-virtual-machine +50567301,"Custom Redirect page is not working

I have the custom error setting <customErrors mode="Off" defaultRedirect="Error.aspx"/>.

It's totally not working. I tried <deployment retail="false"/> - no result.

The application is in Azure App Service, so IIS plays no role here. Please help.

""",azure-web-app-service +38138379,How to migrate Github Private server to another server in Azure Migration

We have a private GitHub server on Ubuntu running in an Azure subscription, and the requirement is to migrate it to another private GitHub server. We decided to migrate from one Azure subscription to another by taking an image.

So what steps/processes have to be done, pre- and post-migration, for a private GitHub server migration from one Azure subscription to another?

,azure-virtual-machine +57621967,"How do I move Work Items from one organization to another

We have a number of Work Items in a project. Now there is another organization in our DevOps, and we wish to move all our existing Work Items from the old organization (project) to the new one. How can this be done?

I've seen people discussing this before, and some comments saying "we use Excel", but no information about how to actually do this.

""",azure-devops +50275661,"VSTS Dashboard Widget getWorkItem optional parameter expand

I am writing a VSTS dashboard widget used for Work Item Tracking

However, I am running into a problem when using the getWorkItem() function. I want to get the IDs of all the Features under a given Epic (I already know the Epic ID). I am confident that if I set the expand parameter of getWorkItem() to "All" I will get a list of all the Features and their respective IDs. Unfortunately, I do not know how to define the "type" of the expand parameter and how to properly pass it as a value to the getWorkItem() function.

Here is my code:

VSS.require(["VSS/Service", "TFS/Dashboards/WidgetHelpers", "TFS/WorkItemTracking/RestClient"],
    function (VSS_Service, WidgetHelpers, TFS_Wit_WebApi) {
        WidgetHelpers.IncludeWidgetStyles();
        VSS.register("myapp", function () {
            var fetchData = function (widgetSettings) {
                const epicID = 123456;
                // Get a WIT client to make REST calls to VSTS
                return VSS_Service.getCollectionClient(TFS_Wit_WebApi.WorkItemTrackingHttpClient)
                    .getWorkItem(123456, null, null, All)
                    .then(
                        // Successful retrieval of workItems
                        function (workItems) {
                            $('#myText').text(JSON.stringify(workItems));
                            console.log(workItems);
                            // Use the widget helper and return success as Widget Status
                            return WidgetHelpers.WidgetStatusHelper.Success();
                        },
                        function (error) {
                            // Use the widget helper and return failure as Widget Status
                            return WidgetHelpers.WidgetStatusHelper.Failure(error.message);
                        });
            }

Here is the VSTS reference for expand. It explains what the values can be but doesn't say how to pass it into the getWorkItem() function.

I would like to set the optional expand parameter of the function to \""All\"" but don't know its type and how to properly define and use it.

""",azure-devops +22937220,"How to start Windows Azure Storage Emulator V3.0 from code

Since I installed the new Windows Azure SDK 2.3, I get a warning from csrun:

\""DevStore interaction through CSRun has been depricated. Use WAStorageEmulator.exe instead.\""

So there are two questions: 1) How to start the new storage emulator correctly from code? 2) How to determine from code if the storage emulator is already running?
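One way to answer both questions is to launch `WAStorageEmulator.exe` with its `start` verb and to parse the output of its `status` verb. A minimal sketch using Java's ProcessBuilder — the install path and the exact `status` output format (`IsRunning: True/False`) are assumptions to verify against your SDK version, and the same pattern translates directly to C#'s `Process.Start`:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.List;

public class StorageEmulator {
    // Assumed default install path for the SDK 2.3 emulator -- adjust for your machine.
    static final String EMULATOR =
        "C:\\Program Files (x86)\\Microsoft SDKs\\Windows Azure\\Storage Emulator\\WAStorageEmulator.exe";

    // Builds the command line for a given verb ("start", "stop", "status").
    static List<String> command(String verb) {
        return List.of(EMULATOR, verb);
    }

    // Runs the verb and returns the emulator's console output (stdout + stderr).
    static String run(String verb) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command(verb)).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) out.append(line).append('\n');
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Question 2: "status" reports whether the emulator is already running.
        String status = run("status");
        // Question 1: if not running, start it.
        if (!status.contains("IsRunning: True")) {
            run("start");
        }
    }
}
```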

""",azure-storage +49195041,"how to analyze memory leaks for \""azure web apps\"" (PaaS)

I am looking to analyze memory leaks for a web app deployed in Azure.

Referring to the following URLs:

We were able to extract a memory dump and analyze it, but since we were not able to inject the LeakTrack DLL / enable memory leak tracking when collecting the dump, memory analysis reports that leak analysis was not performed because the DLL was not injected.

Please suggest how to find memory leaks by analyzing the dump in this scenario.

""",azure-web-app-service +53391754,"Publish build artifact from folder without including root folder

On Azure DevOps I have some files I want to publish:

  • $(Build.ArtifactStagingDirectory)/dist/app/index.html
  • $(Build.ArtifactStagingDirectory)/dist/app/bundle.js

I want to publish them into an artifact app.zip which contains at the root level: - index.html - bundle.js

However, when I use the \""Publish Build Artifacts\"" task with the path set to $(Build.ArtifactStagingDirectory)/dist/app I get the following contents in app.zip:

  • app/
    • index.html
    • bundle.js

I tried setting the publish path to:

  • $(Build.ArtifactStagingDirectory)/dist/app/**
  • $(Build.ArtifactStagingDirectory)/dist/app/*
  • $(Build.ArtifactStagingDirectory)/dist/app/*.*

but all of these fail the build with the error Not found PathtoPublish

""",azure-devops +33800455,Is it safe to multithread host.RunAndBlock() in azure WebJobs

I need to handle a large number of separate queues. The queues need to be separated but handled in the same way, and I don't want to set up multiple queue functions, so I thought of this solution, but I'm not sure it's a safe way to do it:

var connectors = GetTheConnectors();

var tasks = new List<Task>();
foreach (var item in connectors)
{
    var task = Task.Factory.StartNew(() => {
        var host = new JobHost(new JobHostConfiguration
        {
            NameResolver = new QueueNameResolver(item.Name)
        });
        host.RunAndBlock();
    });
    tasks.Add(task);
}

Task.WaitAll(tasks.ToArray());

If not, does anyone have a better solution?

,azure-storage +56747494,"How to programmatically using C# fetch or update or toggle or manipulate the auto-shutdown parameters for a selected azure VM in Azure portal?

I am trying to programmatically fetch, using C#, the details of the auto-shutdown parameters for a selected VM from the Azure portal. The things I want to achieve are given below:

  1. First, get the auto-shutdown status: is it enabled or disabled?
  2. If it is enabled, get the auto-shutdown time and its time-zone-related information
  3. Based on input, update the time zone and time, or disable auto-shutdown on a need basis

I want this to be done via C# program.

I have not been able to figure out how to achieve it through the googling I have done. Please provide a detailed step-by-step guide, as I am new to C# and Azure.

Please note that the VMs in our project are not created in any DevTest Labs; they are created through LCS directly, with the DEMO environment as an option during creation.

Can you please provide details taking the above points into consideration? Or is this not possible because the approach is not correct?

Please let me know if any other information is needed from my end to enable you provide me a solution.

I have already looked into below PowerShell script:

How to collect the Azure VM auto-shutdown time using PowerShell?

But this seems to involve a VM created in a DevTest Lab, which in my case will not work, as our VMs are not created in a separate lab, as I tried to explain above. Hence I think the script will not work.

I tried to look into a few REST APIs but could not find anything there either.

""",azure-virtual-machine +11173260,What's the best workflow for an Azure virtual machine (Windows)?

I'm developing a Socket.IO application with a MongoDB database. For various reasons I am developing the application to run on a Windows virtual machine within Azure. Setting everything up was fairly painless and I now have a basic application within the cloud. However I am unable to find a comfortable workflow. I want to be able to push changes to the virtual machine (as if I was on *nix system using git) and I'm not sure how best to do this.

,azure-virtual-machine +55113103,"Azure CLI returns error on Storage access

I'm on a Linux machine trying to use the Azure CLI (az) to list blobs in a storage container:

az storage blob list --container-name <name> --account-name <name> --account-key <key> 

when executing it returns the error

azure.common.AzureHttpError: One of the request inputs is out of range. ErrorCode: OutOfRangeInput <?xml version=\""1.0\"" encoding=\""utf-8\""?><Error><Code>OutOfRangeInput</Code><Message>One of the request inputs is out of range. RequestId:bf2b4b1d-401e-0055-1678-d80520000000 Time:2019-03-12T02:08:42.4135303Z</Message></Error> 

I can't find any documentation that explains the error.
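A common cause of OutOfRangeInput on blob operations is a container name that violates the documented naming rules (3-63 characters; lowercase letters, digits, and hyphens; starting and ending with a letter or digit; no consecutive hyphens). Whether that is the cause here is an assumption, but a small sketch of a validator for those rules can help rule it out:

```java
public class ContainerName {
    // Checks a blob container name against the documented naming rules:
    // 3-63 chars, lowercase letters / digits / hyphens, must start and end
    // with a letter or digit, no consecutive hyphens.
    public static boolean isValid(String name) {
        if (name == null || name.length() < 3 || name.length() > 63) {
            return false;
        }
        // The regex enforces the allowed alphabet, the letter/digit boundaries,
        // and that every hyphen sits between two letters/digits.
        return name.matches("[a-z0-9](-?[a-z0-9])*");
    }
}
```

Running the container name from the failing command through a check like this is a quick first step before digging into account keys or endpoints.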

""",azure-storage +48351439,"Azure Functions fails to receive Azure Queue messages [.net Core 2.0]

When I updated my Azure Functions project to .NET Core 2.0, it stopped triggering on messages in my queue.

TranslatorAPI.cs

public static class TranslatorAPI
{
    [FunctionName(\""TranslatorAPI\"")]
    public static void Run([QueueTrigger(\""Translator\"")]string mySbMsg, TraceWriter log)
    {
        //breakpoint here, but never hit
        CallTranslator callTranslator = new CallTranslator();

        //something
    }
}

local.settings.json

{
  \""IsEncrypted\"": false,
  \""Values\"": {
    \""WEBSITE_SLOT_NAME\"": \""Production\"",
    \""FUNCTIONS_EXTENSION_VERSION\"": \""~1\"",
    \""ScmType\"": \""None\"",
    \""WEBSITE_AUTH_ENABLED\"": \""False\"",
    \""FUNCTION_APP_EDIT_MODE\"": \""readwrite\"",
    \""APPINSIGHTS_INSTRUMENTATIONKEY\"": \""<Key>\"",
    \""WEBSITE_NODE_DEFAULT_VERSION\"": \""6.5.0\"",
    \""WEBSITE_CONTENTAZUREFILECONNECTIONSTRING\"": \""DefaultEndpointsProtocol=https;AccountName=<AccountName>;AccountKey=<Key>\"",
    \""WEBSITE_CONTENTSHARE\"": \""<ShareName>\"",
    \""WEBSITE_SITE_NAME\"": \""<Functions'Name>\"",
    \""<ServiceBusInstanceName>_RootManageSharedAccessKey_SERVICEBUS\"": \""Endpoint=<ConnectionString>\"",
    \""AzureWebJobsStorage\"": \""DefaultEndpointsProtocol=https;AccountName=<AccountName>;AccountKey=<Key>\"",
    \""AzureWebJobsDashboard\"": \""DefaultEndpointsProtocol=https;AccountName=<AccountName>;AccountKey=<Key>;BlobEndpoint=<BlobURL>;TableEndpoint=<TableURL>;QueueEndpoint=<QueueURL>;FileEndpoint=<FileURL>\""
  }
}

Workaround:

With the latest NuGet package \""Microsoft.Azure.WebJobs\"" 3.0.0-beta4 there is a known issue with handling an extension for the ServiceBus connection, and the project build fails at establishing a path for the ServiceBus metadata. Updating these packages from the source Azure App Service:

Microsoft.Azure.WebJobs: 3.0.0-beta4-11131 Microsoft.Azure.WebJobs.Extensions: 3.0.0-beta4-10578 Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator: 1.0.0-beta3 Microsoft.Azure.WebJobs.ServiceBus: 3.0.0-beta4-11131 

allowed the build to succeed. However, it still cannot receive a message from my queue. Here's the log from the function console.

[2018/01/20 0:22:38] Reading host configuration file '<ProjectPath>\\bin\\Debug\\netstandard2.0\\host.json' [2018/01/20 0:22:38] Host configuration file read: [2018/01/20 0:22:38] {} [2018/01/20 0:22:40] Loading custom extension 'HttpExtensionConfiguration' [2018/01/20 0:22:40] Unable to load custom extension type for extension 'HttpExtensionConfiguration' (Type: `Microsoft.Azure.WebJobs.Extensions.Http.HttpExtensionConfiguration  Microsoft.Azure.WebJobs.Extensions.Http  Version=3.0.0.0  Culture=neutral  PublicKeyToken=null`).The type does not exist or is not a valid extension. Please validate the type and assembly names. [2018/01/20 0:22:40] Loading custom extension 'ServiceBusExtensionConfig' [2018/01/20 0:22:40] Loaded custom extension: ServiceBusExtensionConfig from '<ProjectPath>\\bin\\Debug\\netstandard2.0\\bin\\Microsoft.Azure.WebJobs.ServiceBus.dll' [2018/01/20 0:22:41] Generating 1 job function(s) [2018/01/20 0:22:42] Starting Host (HostId=<PCName>-1149239943  Version=2.0.11415.0  ProcessId=12536  Debug=False  ConsecutiveErrors=0  StartupCount=1  FunctionsExtensionVersion=~1) [2018/01/20 0:22:44] Found the following functions: [2018/01/20 0:22:44] <Functions'Name>.TranslatorAPI.Run [2018/01/20 0:22:44] [2018/01/20 0:22:44] Job host started [2018/01/20 0:22:44] Host lock lease acquired by instance ID '<ID>'. Listening on http://localhost:7071/ Hit CTRL-C to exit... 

Although it fails to load the extension for the HTTP trigger type, it succeeds in generating TranslatorAPI.Run.

So what would be the problem here?

EDIT

I assumed the method failed to get the connection string so I changed the method argument to

public static void Run([QueueTrigger(\""Translator\"", Connection = \""<ServiceBusInstanceName>_RootManageSharedAccessKey_SERVICEBUS\"")]string mySbMsg, TraceWriter log) 

Then the error changed into

[2018/01/20 1:32:57] Run: Microsoft.Azure.WebJobs.Host: Error indexing method 'TranslatorAPI.Run'. Microsoft.Azure.WebJobs.Host: Failed to validate Microsoft Azure WebJobs SDK <ServiceBusInstanceName>_RootManageSharedAccessKey_SERVICEBUS connection string. The Microsoft Azure Storage account connection string is not formatted correctly. Please visit https://go.microsoft.com/fwlink/?linkid=841340 for details about configuring Microsoft Azure Storage connection strings. 

Now it seems to say my connection string for the Service Bus is wrong, based on validation against the Azure Storage account connection string format. I'm not sure how to interpret this or how to solve the issue.

""",azure-functions +51268507,How do I get a camera to work on an Azure Virtual Machine

I want to be able to use a camera on an Azure Virtual Machine using Windows 10.

The camera feed can come either from the local machine or from another machine. Either way I get the error below: We can't find your camera. NoCamerasAreAttached.

This happens even though I have enabled the camera through the connection and enabled USB redirection in Windows 10, including via gpedit.msc.

,azure-virtual-machine +47399191,"How to create a counter inside a for loop for Iterable in Java and get the value of the counter variable

Here's my current code snippet:

Iterable<HashMap<String, EntityProperty>> results =
        cloudTable.execute(query, propertyResolver);

if (results == null) {
    System.out.println(\""No files processed\"");
    exit_code = \""exit_code=1\"";
} else {
    for (HashMap<String, EntityProperty> entity : results) {
        // don't know how to start the loop here
    }
}

I have a query for retrieving a list of certain files in Microsoft Azure. Now I just need to show the number of files processed.

I know the concept of what I should be doing: create a counter within the for loop, and after the iterations in that loop, whatever the value of that counter variable is should give me the count of files, right? I just don't know how to start :( I've been reading so much about Iterable in Java but still can't grasp how it would work.
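The counter idea described above can be written as a small helper. A minimal sketch, independent of the Azure `cloudTable.execute` call — any Iterable works the same way in an enhanced for loop:

```java
import java.util.List;

public class CountExample {
    // Counts the elements of any Iterable by incrementing a counter inside
    // an enhanced for loop -- the same pattern works for the
    // Iterable<HashMap<String, EntityProperty>> returned by the table query.
    public static int countItems(Iterable<?> results) {
        int count = 0;
        for (Object ignored : results) {
            count++;   // one increment per processed entity
        }
        return count;
    }

    public static void main(String[] args) {
        int processed = countItems(List.of("file1", "file2", "file3"));
        System.out.println(processed + " files processed");
    }
}
```

In the original snippet, the increment (and any per-entity work) would go in the body of the `for (HashMap<String, EntityProperty> entity : results)` loop, with the count printed after the loop finishes.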

Any inputs would be greatly appreciated

""",azure-storage +38710138,"How often can MS Azure App Services Outbound IP addresses change?

I'm using an Azure App Service that calls an external API that uses IP address whitelisting for defense-in-depth protection.

I'm aware I can find my Outbound IP addresses of my App Services under the WebApp -> Settings -> Properties -> Outbound IP addresses (showing a list of 4 comma separated IP addresses) which can be supplied to the external API whitelist. I understand Microsoft publishes a regularly updated list of Azure datacenter's IP addresses for outbound traffic that I can whitelist: https://www.microsoft.com/en-us/download/details.aspx?id=41653

The issue is that the external API can only handle a limited number of IP addresses, not the full list of Azure datacenter IPs. Would it be safe to just provide the 4 comma-separated IP addresses? Is there clear Microsoft documentation on how often or when these IP addresses can dynamically change?

I have tried to look for the answer and found two external sites that suggested it only changes when moving Azure regions [Ref 2] or if you scale up/down (but scale out/in is apparently fine) [Ref 1]. Is this correct information?

Is the Azure App Service Environment the only other viable alternative in my situation?

""",azure-web-app-service +53266028,"Azure DevOps hosted ubuntu agent issue updating Application Gateway

I deployed some infra using Terraform, including an application gateway. Unfortunately not all settings can be set/updated with Terraform, so I have a shell script that updates the application gateway.

#!/bin/bash
SP_ID=${1}
SP_SECRET=${2}
TENANT_ID=${3}
SUBSCRIPTION=${4}
RG=${5}

az login --service-principal -u ${SP_ID} -p ${SP_SECRET} -t ${TENANT_ID}
az account set --subscription ${SUBSCRIPTION}
az account list -o table

# Get the name of the AG
echo \""RG = ${RG}\""
AG=$(az network application-gateway list --resource-group ${RG} | tail -n 1 | awk '{ print $2 }')
echo \""AG = ${AG}\""

# Get the AG backend pool name
BP=$(az network application-gateway address-pool list --resource-group ${RG} --gateway-name ${AG} | tail -n 1 | awk '{ print $1 }')
echo \""Backend pool = ${BP}\""

# Get the frontend ip of the load balancer
LB=$(az network lb list --resource-group ${RG} | tail -n 1 | awk '{ print $2 }')
LBFEIP=$(az network lb frontend-ip list --lb-name ${LB} --resource-group ${RG} | tail -n 1 | awk '{ print $2 }')
echo \""Load balancer = ${LB}\""
echo \""Frontend ip LB = ${LBFEIP}\""

# Update the backend pool of the AG with the frontend ip of the loadbalancer
echo \""Updating Backend address pool of AG ${AG}\""
az network application-gateway address-pool update --gateway-name $AG --resource-group $RG --name $BP --servers ${LBFEIP}

# Update http settings
echo \""Updating HTTP settings of AG ${AG}\""
AG_HTS=$(az network application-gateway http-settings list --resource-group ${RG} --gateway-name ${AG} | tail -n 1 | awk '{ print $2 }')
az network application-gateway http-settings update --resource-group ${RG} --gateway-name ${AG} --name ${AG_HTS} --host-name-from-backend-pool true

# Update health probe
echo \""Updating Health probe of AG ${AG}\""
AG_HP=$(az network application-gateway probe list --resource-group ${RG} --gateway-name ${AG} | tail -n 1 | awk '{ print $4 }')
az network application-gateway probe update --resource-group ${RG} --gateway-name ${AG} --name ${AG_HP} --host '' --host-name-from-http-settings true

This script works fine running locally from my laptop, but via the Azure DevOps release pipeline I get the error:

ERROR: az network application-gateway address-pool list: error: argument --gateway-name: expected one argument 

Somehow it cannot get the application gateway name when the script runs through the release pipeline. Again, when running this script locally it works fine. Anyone have an idea of what I may be missing here or can try?

I created the script on WSL Ubuntu, used an Ubuntu hosted agent to publish the artifacts, and also use a hosted Ubuntu agent to deploy the script.

""",azure-devops +50122242,"what's the easiest way to upload a solution to a vsts repo?

What's the easiest way to upload a solution to a new VSTS repo? I've created a new repo in VSTS and I need to upload a solution to it. When I go to the repo in VSTS there's an \""Upload file(s)\"" button which opens a \""Commit\"" dialog. The top panel of the Commit dialog has a \""Browse...\"" widget which says: \""Drag and drop files here or click browse to select a file\""

If I provide the path where my app exists then the dialog only lets me select top-level files but not folders. I do see a way to manually add folders via the VSTS UI. Is there a simpler way to add all the contents of my solution folder at once as opposed to manually piecing this together as described above?

""",azure-devops +19893720,Scaling WordPress on Windows Azure

I'm running a WordPress multisite which, for short periods every week, experiences a large number of users, requiring more CPU + RAM.

I therefore wish to make use of Azure autoscale to turn on more instances when the demand is there. However, is it possible to make a setup where the different instances share the same storage and database? And if yes, how could it be done?

,azure-virtual-machine +44826846,"Team Services control options in Release task

I might be doing something wrong, but I have created a Build and a Release in VSTS. For the release I need to execute a task only if one of the previous tasks failed. This is easy for the tasks in the Build: there is a dropdown with several options, including \""Only when a previous task has failed\"". However, for the release I only have \""Always run\"" and \""Continue on error\"", which doesn't work for me.

Is this because the control options for the release don't support the same options as the build, or is something missing in my VSTS?

""",azure-devops +50727626,"How to config dns of a virtual machine in Azure to point to another domain

I have an application that is installed on a virtual machine in Azure. It is accessible with the public ip and the dns name offered by Azure.

I have a domain name with SSL at Namecheap. I want to point the DNS name from Azure to it or another provider.

To illustrate even more:

My app is accessible to the outside world using: x.x.x.x/app/login

or with: mydnsname.azure.com/app/login

What I want is: anotherdomain.com/app/login


I don't want to change my records inside Namecheap, i.e. changing the CNAME record to point to my DNS name and the A record to the public IP of my VM.

I know this method but in my situation it doesn't work.


""",azure-virtual-machine +51965087,"spring boot azure storage connection string error

I use Spring Boot 2.0.2. When I run the application from cmd with the mvn spring-boot:run command, it runs smoothly, but when I export it to a WAR and run it on Tomcat at the ROOT path I get the following error:

2018-08-22 17:21:17.312  INFO 10764 --- [ost-startStop-1] m.s.d.PriorityQueueEmailSchedulerService : Scheduled email Email{from=developerglob@gmail.com  to=[cisvapery@gmail.com]  subject=Glob report buy point firetime \\'2018-08-22T16:38+07:00\\' and priority 1  body=  attachments=[report_buy_point pripurna bandung.csv]  encoding=UTF-8} at UTC time [2018-08-22T16:38+07:00  1] with priority {} with template [WARN] AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'storageAzureService': Unsatisfied dependency expressed through field 'cloudStorageAccount'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'com.microsoft.azure.spring.autoconfigure.storage.StorageAutoConfiguration': Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'azure.storage-com.microsoft.azure.spring.autoconfigure.storage.StorageProperties': Could not bind properties to 'StorageProperties' : prefix=azure.storage  ignoreInvalidFields=false  ignoreUnknownFields=true; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'azure.storage' to com.microsoft.azure.spring.autoconfigure.storage.StorageProperties 2018-08-22 17:21:18.183  INFO 10764 --- [ost-startStop-1] m.s.d.PriorityQueueEmailSchedulerService : Closing EmailScheduler 2018-08-22 17:21:18.185  INFO 10764 --- [ost-startStop-1] m.s.d.PriorityQueueEmailSchedulerService : Interrupting email scheduler consumer Exception in thread \PriorityQueueEmailSchedulerService -- Consumer\"" org.springframework.mail.MailSendException: Mail server connection failed; nested exception is 
javax.mail.MessagingException: Could not convert socket to TLS;   nested exception is:         javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target. Failed messages: javax.mail.MessagingException: Could not convert socket to TLS;   nested exception is:         javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target; message exception details (1) are: Failed message 1: javax.mail.MessagingException: Could not convert socket to TLS;   nested exception is:         javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target         at com.sun.mail.smtp.SMTPTransport.startTLS(SMTPTransport.java:2155)         at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:752)         at javax.mail.Service.connect(Service.java:366)         at org.springframework.mail.javamail.JavaMailSenderImpl.connectTransport(JavaMailSenderImpl.java:515)         at org.springframework.mail.javamail.JavaMailSenderImpl.doSend(JavaMailSenderImpl.java:435)         at org.springframework.mail.javamail.JavaMailSenderImpl.send(JavaMailSenderImpl.java:359)         at org.springframework.mail.javamail.JavaMailSenderImpl.send(JavaMailSenderImpl.java:354)         at it.ozimov.springboot.mail.service.defaultimpl.DefaultEmailService.send(DefaultEmailService.java:138)         at it.ozimov.springboot.mail.service.defaultimpl.PriorityQueueEmailSchedulerService$Consumer.run(PriorityQueueEmailSchedulerService.java:443) Caused by: javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target         at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)         at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1959)         at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:328)         at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:322)         at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1614)         at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)         at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1052)         at sun.security.ssl.Handshaker.process_record(Handshaker.java:987)         at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1072)         at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)         at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)         at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)         at com.sun.mail.util.SocketFetcher.configureSSLSocket(SocketFetcher.java:620)         at com.sun.mail.util.SocketFetcher.startTLS(SocketFetcher.java:547)         at com.sun.mail.smtp.SMTPTransport.startTLS(SMTPTransport.java:2150)         ... 
8 more Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target         at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:397)         at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:302)         at sun.security.validator.Validator.validate(Validator.java:260)         at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)         at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)         at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)         at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1596)         ... 18 more Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target         at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)         at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)         at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)         at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:392)         ... 24 more 2018-08-22 17:21:39.400  INFO 10764 --- [ost-startStop-1] m.s.d.PriorityQueueEmailSchedulerService : Closed EmailScheduler [INFO] LocalContainerEntityManagerFactoryBean - Closing JPA EntityManagerFactory for persistence unit 'default' 2018-08-22 17:21:39.431  INFO 10764 --- [ost-startStop-1] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown initiated... 2018-08-22 17:21:39.437  INFO 10764 --- [ost-startStop-1] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown completed. [INFO] ConditionEvaluationReportLoggingListener -  Error starting ApplicationContext. 
To display the conditions report re-run your application with 'debug' enabled. [ERROR] LoggingFailureAnalysisReporter - *************************** APPLICATION FAILED TO START ***************************  Description:  Failed to bind properties under 'azure.storage' to com.microsoft.azure.spring.autoconfigure.storage.StorageProperties:      Property: azure.storage.connection-string     Value: DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=globimage;AccountKey=j98PljOhAYdToMXHxFeLd5sC6afk1DMBeF8dfYETOYJU0j8AHp0Fkh3dgikoevByrkb2zCr4IwzST4HqjkBTUQ==     Origin: class path resource [application.properties]:60:33     Reason: HV000030: No validator could be found for constraint 'javax.validation.constraints.NotEmpty' validating type 'java.lang.String'. Check configuration for 'connectionString'  Action:  Update your application's configuration 

even though the application properties are correct. application.properties:

#Server konfiguration port server.port=8087 #spring.resources.static-locations[0]=file:src/main/resources/static/ #spring.resources.static-locations[1]=classpath:/static/  #JPA Konfiguration spring.datasource.url=jdbc:sqlserver://52.230.65.127;databaseName=globdbreview spring.datasource.username=sa spring.datasource.password=Develasas spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver spring.jpa.show-sql=true spring.jpa.hibernate.ddl-auto = update spring.jpa.properties.hibernate.legacy_limit_handler=true  #SQL Server JPA konfiguration spring.jpa.properties.hibernate.dialect=com.bridgetech.glob.model.SQLServerNativeDialect spring.jpa.hibernate.naming.implicit-strategy=org.hibernate.boot.model.naming.ImplicitNamingStrategyLegacyJpaImpl spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl  #default JSP #spring.mvc.view.prefix=/WEB-INF/jsp/ #spring.mvc.view.suffix=.jsp  #Logging logging.level.org.springframework.web=INFO  #Thymeleaf konfiguration spring.thymeleaf.cache=false  # Specify the Lucene Directory #spring.jpa.properties.hibernate.search.default.directory_provider = filesystem  # Using the filesystem DirectoryProvider you also have to specify the default # base directory for all indexes  #spring.jpa.properties.hibernate.search.default.indexBase = indexpath  #Smtp mail konfiguration spring.mail.default-encoding=UTF-8 spring.mail.host=smtp.gmail.com spring.mail.username=developerglobsa@gmail.com spring.mail.password=@asdadsadsa spring.mail.port=587 spring.mail.protocol=smtp spring.mail.test-connection=false spring.mail.properties.mail.smtp.auth=true spring.mail.properties.mail.smtp.starttls.enable=true  #upload file #spring.servlet.multipart.max-file-size=50mb #spring.servlet.multipart.max-request-size=50mb  spring.mail.scheduler.enabled=true spring.mail.scheduler.priorityLevels=5  spring.mail.scheduler.persistence.enabled=false 
spring.mail.scheduler.persistence.redis.embedded=false spring.mail.scheduler.persistence.redis.enabled=false  #azure storage azure.storage.connection-string=DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=globim;AccountKey=j98PljOhAYdToMXHxFeLd5sC6afk1DMBeF8dfYETOYJU0j8AHp0Fkh3dgik 

StorageAzureService:

@Service public class StorageAzureService {      @Autowired     private CloudStorageAccount cloudStorageAccount;      public static final String storageConnectionString = \""DefaultEndpointsProtocol=[http|https];EndpointSuffix=core.windows.net;AccountName=globimage;AccountKey=j98PljOhAYdToMXHxFeLd5sC6afk1DMBeF8dfYETOYJU0j8AHp0Fkh3dgikoevByrkb2zCr4IwzST4HqjkBTUQ==\"";      final String containerName = \""image\"";      public StorageAzureService() { //      try { //          cloudStorageAccount = CloudStorageAccount.parse(storageConnectionString); //      } catch (InvalidKeyException e) { //          // TODO Auto-generated catch block //          e.printStackTrace(); //      } catch (URISyntaxException e) { //          // TODO Auto-generated catch block //          e.printStackTrace(); //      }         // TODO Auto-generated constructor stub     }      public void createContainerIfNotExists() {         try {             // Create a blob client.             final CloudBlobClient blobClient = cloudStorageAccount.createCloudBlobClient();             // Get a reference to a container. (Name must be lower case.)             final CloudBlobContainer container = blobClient.getContainerReference(containerName);             // Create the container if it does not exist.             if (container.createIfNotExists()) {                 System.out.println(\""True: \"" + containerName);             } else {                 System.out.println(\""False: \"" + containerName);             }         } catch (Exception e) {             // Output the stack trace.             e.printStackTrace();         }     }      public String uploadTextBlob(MultipartFile file  String fileName) {         try {              // Create a blob client.             final CloudBlobClient blobClient = cloudStorageAccount.createCloudBlobClient();             // Get a reference to a container. (Name must be lower case.)             
final CloudBlobContainer container = blobClient.getContainerReference(containerName);             // Get a blob reference for a text file.             CloudBlockBlob blob = container.getBlockBlobReference(fileName);             // Upload some text into the blob.             blob.upload(file.getInputStream()  file.getSize());             System.out.println(\""success upload.\"" + blob.getUri().toString());             return blob.getUri().toString();         } catch (Exception e) {             // Output the stack trace.             e.printStackTrace();         }         return null;     }      public void deleteTextBlob(String fileName) {         try {             if (fileName.startsWith(\""https://globimage.blob.core.windows.net/glob-images/\"")) {                 System.out.println(\""True: https://globimage.blob.core.windows.net/glob-images/\"");                 String fileNameImgGlob = fileName.substring(52);                 // Create a blob client.                 final CloudBlobClient blobClient = cloudStorageAccount.createCloudBlobClient();                 // Get a reference to a container. (Name must be lower case.)                 final CloudBlobContainer container = blobClient.getContainerReference(containerName);                 // Get a blob reference for a text file.                 CloudBlockBlob blob = container.getBlockBlobReference(fileNameImgGlob);                 // Upload some text into the blob.                 if (blob.exists()) {                     blob.deleteIfExists();                     System.out.println(\""success delete.\"" + blob.getUri().toString());                 } else {                     System.out.println(\""file not found.\"");                 }             }         } catch (Exception e) {             // Output the stack trace.             e.printStackTrace();         }     }      public void deleteShareFile() {          try {              // Create the file client.             
CloudFileClient fileClient = cloudStorageAccount.createCloudFileClient();              // Get a reference to the file share             CloudFileShare share = fileClient.getShareReference(\""sampleshare\"");              if (share.deleteIfExists()) {                 System.out.println(\""sampleshare deleted\"");             }         } catch (Exception e) {             e.printStackTrace();         }      }      public void createDirectory() {          try {             // Create the file client.             CloudFileClient fileClient = cloudStorageAccount.createCloudFileClient();              // Get a reference to the file share             CloudFileShare share = fileClient.getShareReference(\""sampleshare\"");              // Get a reference to the root directory for the share.             CloudFileDirectory rootDir = share.getRootDirectoryReference();              // Get a reference to the sampledir directory             CloudFileDirectory sampleDir = rootDir.getDirectoryReference(\""sampledir\"");              if (sampleDir.createIfNotExists()) {                 System.out.println(\""sampledir created\"");             } else {                 System.out.println(\""sampledir already exists\"");             }         } catch (Exception e) {             // TODO: handle exception         }      }      public void deleteDirectory() {         try {             // Create the file client.             CloudFileClient fileClient = cloudStorageAccount.createCloudFileClient();              // Get a reference to the file share             CloudFileShare share = fileClient.getShareReference(\""sampleshare\"");              // Get a reference to the root directory for the share.             
CloudFileDirectory rootDir = share.getRootDirectoryReference();              // Get a reference to the directory you want to delete             CloudFileDirectory containerDir = rootDir.getDirectoryReference(\""sampledir\"");              // Delete the directory             if (containerDir.deleteIfExists()) {                 System.out.println(\""Directory deleted\"");             }         } catch (Exception e) {             // TODO: handle exception         }     }      public void listFilesAndDirectories() {         try {             // Create the file client.             CloudFileClient fileClient = cloudStorageAccount.createCloudFileClient();              // Get a reference to the file share             CloudFileShare share = fileClient.getShareReference(containerName);             // Get a reference to the root directory for the share.             CloudFileDirectory rootDir = share.getRootDirectoryReference();              for (ListFileItem fileItem : rootDir.listFilesAndDirectories()) {                 System.out.println(fileItem.getUri());             }         } catch (Exception e) {             e.printStackTrace();         }     }      public void uploadFile() {         try {             // Create the file client.             CloudFileClient fileClient = cloudStorageAccount.createCloudFileClient();              // Get a reference to the file share             CloudFileShare share = fileClient.getShareReference(\""share-images\"");              if (share.createIfNotExists()) {                 System.out.println(\""New share created\"");             }              // Get a reference to the root directory for the share.             CloudFileDirectory rootDir = share.getRootDirectoryReference();              // Define the path to a local file.             
final String filePath = \""D:\\\\uploads\\\\my.jpg\"";              CloudFile cloudFile = rootDir.getFileReference(\""my.jpg\"");             cloudFile.uploadFromFile(filePath);         } catch (Exception e) {             // TODO: handle exception             System.out.println(e.getMessage());         }     }      public void downloadFile() {         try {             // Create the file client.             CloudFileClient fileClient = cloudStorageAccount.createCloudFileClient();              // Get a reference to the file share             CloudFileShare share = fileClient.getShareReference(\""sampleshare\"");              // Get a reference to the root directory for the share.             CloudFileDirectory rootDir = share.getRootDirectoryReference();              // Get a reference to the directory that contains the file             CloudFileDirectory sampleDir = rootDir.getDirectoryReference(\""sampledir\"");              // Get a reference to the file you want to download             CloudFile file = sampleDir.getFileReference(\""SampleFile.txt\"");              // Write the contents of the file to the console.             System.out.println(file.downloadText());         } catch (Exception e) {             // TODO: handle exception         }     }      public void deleteFile2() {         try {             // Create the file client.             CloudFileClient fileClient = cloudStorageAccount.createCloudFileClient();              // Get a reference to the file share             CloudFileShare share = fileClient.getShareReference(\""sampleshare\"");              // Get a reference to the root directory for the share.             
CloudFileDirectory rootDir = share.getRootDirectoryReference();              // Get a reference to the directory where the file to be deleted is in             CloudFileDirectory containerDir = rootDir.getDirectoryReference(\""sampledir\"");              String filename = \""SampleFile.txt\"";             CloudFile file;              file = containerDir.getFileReference(filename);             if (file.deleteIfExists()) {                 System.out.println(filename + \"" was deleted\"");             }         } catch (Exception e) {             // TODO: handle exception         }     }  } 
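A fragile spot in the service above is deleteTextBlob, which strips the container prefix with a hard-coded fileName.substring(52); that offset silently breaks if the account or container name ever changes. A minimal sketch of deriving the blob name from the URL instead (the BlobNameUtil helper is my own illustration, not part of the project):

```java
// Hypothetical helper: recover the blob name from a full blob URL by locating
// the container segment, instead of using a hard-coded character offset.
class BlobNameUtil {
    // blobName("https://acct.blob.core.windows.net/images/a/b.jpg", "images") -> "a/b.jpg"
    static String blobName(String url, String container) {
        String marker = "/" + container + "/";
        int idx = url.indexOf(marker);
        if (idx < 0) {
            // Surfaces mismatches (e.g. URL from another container) as an error
            // rather than a silently wrong substring.
            throw new IllegalArgumentException("URL does not contain container '" + container + "'");
        }
        return url.substring(idx + marker.length());
    }
}
```

A lookup like this would also make the mismatch in the original code visible: deleteTextBlob checks for a glob-images URL prefix while the class actually deletes from the image container.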

GlobApplication:

@ComponentScan(basePackages = \""com.bridgetech.glob\"") @SpringBootApplication @EnableEmailTools public class GlobApplication //implements ApplicationContextAware {      @Autowired     EmailSenderService emailSenderService; //   //  private ApplicationContext applicationContext;      @Bean     WebMvcConfigurer configurer() {         return new WebMvcConfigurer() {             @Override             public void addResourceHandlers(ResourceHandlerRegistry registry) {                 registry.addResourceHandler(\""/static/**\"").addResourceLocations(\""classpath:/static/\"");             }         };     }      public static void main(String[] args) {         SpringApplication.run(GlobApplication.class, args);     }  //  @Override //  public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { //      this.applicationContext = applicationContext; //  } //     @PostConstruct     public void sendEmail() throws UnsupportedEncodingException, InterruptedException, CannotSendEmailException, URISyntaxException {         emailSenderService.scheduleSixEmails(1);  //      close();     }  //  private void close() { //      TimerTask shutdownTask = new TimerTask() { //          @Override //          public void run() { //              ((AbstractApplicationContext) applicationContext).close(); //          } //      }; //      Timer shutdownTimer = new Timer(); //      shutdownTimer.schedule(shutdownTask, TimeUnit.SECONDS.toMillis(20)); //  }  } 

pom.xml:

<?xml version=\""1.0\"" encoding=\""UTF-8\""?> <project xmlns=\""http://maven.apache.org/POM/4.0.0\""     xmlns:xsi=\""http://www.w3.org/2001/XMLSchema-instance\""     xsi:schemaLocation=\""http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"">     <modelVersion>4.0.0</modelVersion>      <groupId>com.bridgetech</groupId>     <artifactId>glob</artifactId>     <version>0.0.1-SNAPSHOT</version>     <packaging>war</packaging>      <name>glob</name>     <description>Demo project for Spring Boot</description>      <parent>         <groupId>org.springframework.boot</groupId>         <artifactId>spring-boot-starter-parent</artifactId>         <version>2.0.2.RELEASE</version>         <relativePath /> <!-- lookup parent from repository -->     </parent>      <properties>         <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>         <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>         <java.version>1.8</java.version>         <azure.version>2.0.4</azure.version>     </properties>      <dependencyManagement>         <dependencies>             <dependency>                 <groupId>com.microsoft.azure</groupId>                 <artifactId>azure-spring-boot-bom</artifactId>                 <version>${azure.version}</version>                 <type>pom</type>                 <scope>import</scope>             </dependency>         </dependencies>     </dependencyManagement>      <dependencies>         <dependency>             <groupId>com.microsoft.azure</groupId>             <artifactId>azure-storage-spring-boot-starter</artifactId>         </dependency>         <dependency>             <groupId>org.json</groupId>             <artifactId>json</artifactId>             <version>20180130</version>         </dependency>          <dependency>             <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-starter-actuator</artifactId>         </dependency>         <dependency>        
     <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-starter-data-jpa</artifactId>         </dependency>         <dependency>             <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-starter-web</artifactId>         </dependency>          <!-- <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId>              <scope>runtime</scope> </dependency> -->         <dependency>             <groupId>com.microsoft.sqlserver</groupId>             <artifactId>mssql-jdbc</artifactId>             <scope>runtime</scope>         </dependency>         <dependency>             <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-starter-tomcat</artifactId>             <scope>provided</scope>         </dependency>         <dependency>             <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-starter-test</artifactId>             <scope>test</scope>         </dependency>          <dependency>             <groupId>javax.servlet</groupId>             <artifactId>jstl</artifactId>         </dependency>          <dependency>             <groupId>org.apache.tomcat.embed</groupId>             <artifactId>tomcat-embed-jasper</artifactId>             <scope>provided</scope>         </dependency>          <dependency>             <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-starter-security</artifactId>         </dependency>          <dependency>             <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-starter-thymeleaf</artifactId>         </dependency>          <dependency>             <groupId>org.hibernate.validator</groupId>             <artifactId>hibernate-validator</artifactId>         </dependency>          <dependency>             <groupId>commons-beanutils</groupId>             <artifactId>commons-beanutils</artifactId>             
<version>1.9.3</version>         </dependency>           <dependency>             <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-starter-mail</artifactId>         </dependency>          <dependency>             <groupId>com.icegreen</groupId>             <artifactId>greenmail</artifactId>             <version>1.5.5</version>             <scope>test</scope>         </dependency>          <!-- bootstrap and jquery -->         <dependency>             <groupId>org.webjars</groupId>             <artifactId>bootstrap</artifactId>             <version>3.3.7</version>         </dependency>         <dependency>             <groupId>org.webjars</groupId>             <artifactId>jquery</artifactId>             <version>3.2.1</version>         </dependency>          <!-- https://mvnrepository.com/artifact/org.apache.poi/poi -->         <dependency>             <groupId>org.apache.poi</groupId>             <artifactId>poi</artifactId>             <version>3.17</version>         </dependency>          <!-- https://mvnrepository.com/artifact/com.lowagie/itext -->         <dependency>             <groupId>com.lowagie</groupId>             <artifactId>itext</artifactId>             <version>2.1.7</version>         </dependency>          <dependency>             <groupId>javax.xml.bind</groupId>             <artifactId>jaxb-api</artifactId>         </dependency>          <dependency>             <groupId>org.eclipse.persistence</groupId>             <artifactId>eclipselink</artifactId>             <version>2.7.0</version>         </dependency>          <!-- https://mvnrepository.com/artifact/org.springframework/spring-webmvc -->         <dependency>             <groupId>org.springframework</groupId>             <artifactId>spring-webmvc</artifactId>         </dependency>          <dependency>             <groupId>org.apache.commons</groupId>             <artifactId>commons-lang3</artifactId>         </dependency>          <!-- 
https://mvnrepository.com/artifact/com.itextpdf/itextpdf -->         <dependency>             <groupId>com.itextpdf</groupId>             <artifactId>itextpdf</artifactId>             <version>5.5.13</version>         </dependency>          <dependency>             <groupId>org.json</groupId>             <artifactId>json</artifactId>             <version>20160810</version>         </dependency>          <dependency>             <groupId>com.google.firebase</groupId>             <artifactId>firebase-admin</artifactId>             <version>5.9.0</version>         </dependency>          <dependency>             <groupId>it.ozimov</groupId>             <artifactId>spring-boot-email-core</artifactId>             <version>0.6.3</version>         </dependency>          <dependency>             <groupId>it.ozimov</groupId>             <artifactId>spring-boot-freemarker-email</artifactId>             <version>0.6.3</version>         </dependency>          <!-- <dependency>             <groupId>org.springframework.boot</groupId>             <artifactId>spring-boot-configuration-processor</artifactId>             <optional>true</optional>         </dependency> -->     </dependencies>      <build>         <plugins>             <plugin>                 <groupId>org.springframework.boot</groupId>                 <artifactId>spring-boot-maven-plugin</artifactId>             </plugin>             <plugin>                 <groupId>org.apache.maven.plugins</groupId>                 <artifactId>maven-resources-plugin</artifactId>                 <configuration>                     <nonFilteredFileExtensions>                         <nonFilteredFileExtension>ttf</nonFilteredFileExtension>                         <nonFilteredFileExtension>woff</nonFilteredFileExtension>                         <nonFilteredFileExtension>woff2</nonFilteredFileExtension>                     </nonFilteredFileExtensions>                 </configuration>             </plugin>          </plugins>     </build> 
  </project> 

How do I fix this problem? Thank you.

""",azure-storage +34053720,"Grunt VSO VM MSBuild

I've been trying to install grunt on the VM for a VSO task, but for some reason I get an error \""null --gruntfile\"".

Does any body know from live experience how to install grunt properly?

[screenshot: grunt]

The grunt task setup works fine with the hosted agent but not with my VM. Many thanks for any help.

""",azure-devops +47430581,"War deployment on Tomcat in Azure Webapp

I have a Web app created in Azure App Service which has Tomcat enabled. Tomcat seems to be running fine. I am able to log in to Manager and start the war deployment. After the deployment reaches 100% and the page refreshes, or when it actually starts the war deployment, the page displays \""The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.\"".

There seems to be some permission problem, but I am not sure how specifically to resolve it.

""",azure-web-app-service +51065229,"Not able to Map Workspace in local folder using VS 2017

I was trying to map a workspace; due to some issue I canceled the process and deleted the entry from Manage Workspaces. But when I retried the process I got the error below:

\""The workspace [workspaceName];[Owner] already exists on computer [ComputerName]\""

I have tried the things below to resolve it:

1) using VS Command prompt

First display the list of workspaces for the named computer, giving workspace name and owner:  >tf workspaces /computer:oldComputerName /collection:\""http://devsrvr:8080/tfs/DefaultCollection\""    To delete:  >tf workspace /delete WorkSpaceName;OwnerName /collection:\""http://devsrvr:8080/tfs/DefaultCollection\"" 

But the listing command is not showing any workspace, so this option doesn't help me. I got the help reference from here.

2) Tried repairing the local Visual Studio TFS workspace mapping by clearing cache data from %localappdata%\\Microsoft\\Team Foundation\\5.0\\Cache. This option also didn't work for me; I am still getting the same error. Reference

3) Checked Control Panel >> User Accounts >> Manage Passwords to delete the entries (this used to work with older VS versions). But this also didn't work.

Please let me know if anyone knows the resolution.

""",azure-devops +45624262,Azure deploying a web.config file with connection string in plain text

The web.config file within my code just contains my local dev database connection string, and when I deploy my web app to Azure it correctly takes my database's connection string from the Connection Strings entry within Application Settings on the Portal. However, it then creates and deploys a web.config file with this string in plain text (I can see this if I check the file via FTP).

Is this the correct behaviour? I don't really want the connection string to be stored in plain text within the deployed web.config file (however secure that may already be).

Is it now a case of encrypting that section of the web.config file via some build/deploy step? I have seen this mentioned in other posts but it's unclear how to do it on Azure.

N.B. Apologies if this has already been asked, but I've done a lot of searching and just can't find anything referring directly to the final web.config file deployed.

,azure-web-app-service +52404870,AAD Logout Azure Active Directory

I deployed a web app in azure with authorization/authentication being set-up. Once you logged in the web app you would be able to get the token using:

https://{webappname}.azurewebsites.com/.auth/me

Then I tried to get the token and used it in Postman via the AUTHORIZATION header, and it worked; I was able to access the site with Postman using that token. Now my concern is: after I logged out using:

https://{webappname}.azurewebsites.com/.auth/logout

I can still access the site using the token that I got recently. Can someone explain why this is happening?

Thanks :D

,azure-web-app-service +51443030,"Node.js/Azure: upload HTML to BlobStorage

I have a frontend which sends the HTML of that page to a Node.js server. The server should then send that HTML to Azure BlobStorage.

Here is my express route to handle this:

router.post(\""/sendcode\"", function(req, res) {   let code = \""\"";   code = req.body.code;   console.log(code);   let service = storage.createBlobService(process.env.AccountName, process.env.AccountKey);   service.createContainerIfNotExists(\""htmlcontainer\"", function(error, result, response) {     if (error) {       throw error;     } else {       service.createBlockBlobFromStream(\""htmlcontainer\"", code, function(err, result, response) {         if (err) {           throw err;         } else {           console.log(result);           console.log(response);         }       });     }   }); }); 

When I call this route I receive this in my console:

<html><style>* { box-sizing: border-box; } body {margin: 0;}</style><body></body></html> 

How can I send it to BlobStorage? Avoid the method I used, as it may be wrong; I can't figure out what function to use because of the scarce documentation.

""",azure-storage +50613574,"Azure Durable Functions \""Not Found\""

I am trying to run the Microsoft durable functions sample. The article is here. When I run the functions project (C#, Visual Studio) it appears to launch fine: the CLI spins up and I get the Host Initialised message and the two start URLs listed.

Http Functions:          HttpStart: http://localhost:7071/orchestrators/{functionName}          HttpSyncStart: http://localhost:7071/orchestrators/{functionName}/wait 

However when I navigate to a function to start it, it tells me \""Not Found\"", e.g. through:

http://localhost:7071/orchestrators/E1_HelloSequence

I get \""Not Found\"":

[30/05/2018 21:17:40] Executed HTTP request: { [30/05/2018 21:17:40]   \""requestId\"": \""9b82e4b2-c0df-4cf4-a191-ce7d7709d30f\"",  [30/05/2018 21:17:40]   \""method\"": \""GET\"",  [30/05/2018 21:17:40]   \""uri\"": \""/orchestrators/E1_HelloSequence\"",  [30/05/2018 21:17:40]   \""authorizationLevel\"": \""Anonymous\"",  [30/05/2018 21:17:40]   \""status\"": \""NotFound\"" [30/05/2018 21:17:40] } 

Any idea why this most basic of samples is giving me such a headache? I have tried many different combinations, all to no avail.

""",azure-functions +16519922,"Azure RDP using public IP not DNS....?

I am unable to RDP to an Azure VM on my corporate network using \""DNS:Port\"" (like vmname.cloudapp.net:3389). It works fine on my home network, which means the endpoints are set correctly.

However, it was possible to RDP to the VM using the public IP, but not anymore. With the public IP I was able to RDP to the VM on my corporate network, but I am not sure whether this has been restricted recently.

Is there any way to access a VM using the public IP rather than the DNS:Port format?

Thanks

""",azure-virtual-machine +40808441,"Azure Cloud Service (Classic) Staging slot fails to start and gives no errors

I have inherited a project running on Azure's Cloud Service (Classic). I have a build of the app that, when pushed to the staging slot, will not launch. I can find no errors anywhere in the table logs or activity log stating a failure of any kind. The activity log actually shows Write DeploymentSlots Accepted. The staging slot sits with blank information.

[screenshot]

There is also a \""test\"" environment set up for this app, and the same build pushed to that starts with no issue. As far as I can tell the two environments are identical.

""",azure-web-app-service +55249552,"Building an MSI package from Powershell

We are setting up a new Azure DevOps pipeline to deploy our (awful) legacy software to our private server. Our goal is to use a modified Git Flow process to build and deploy our software for the Dev Stage and then Production (or Release) servers. We are using Cake scripts and Powershell scripts for the building of the different pieces of our software.

One of the requirements from my Software Manager is to build MSI packages for our software (at least for the Production builds when we version a new release). There are two problems that arise from this:

1) The back end software is made up of several projects with all kinds of weird dependencies on each other (and an external SSIS project which we need to consider as a \""black box\"" outside of our project that I have no control over) and 1 executable which uses most but not all of the DLLs from the building of the other projects.

The front end software is a Sitecore project which is simply a bunch of DLLs and files that need to be copied from one place to another with an IIS restart to refresh the servers.

The back end and the front end will likely have separate Setup projects. But in each setup project do I just add in all of the built projects' output to the Application Folder and hope for the best that they all get put in the right output folders on MSI install?

2) How do I instantiate the build of the Setup project (the project which builds the installer) from Cake Build and/or Powershell? I want to make sure this only runs for the Release builds that we build from the master branch. Is there an Azure tool I need to be made aware of?

Please be understanding, as this is my first full DevOps implementation; I haven't built a .NET installer package in the 10 years since school, and my Powershell skills suck (I came from a Web development/Linux world).

""",azure-devops +47636141,"Azure Functions: CosmosDBTrigger connection string storage

I asked yesterday where to store the connection string for a CosmosDBTrigger. It worked great until I had to push it up to Azure. Now the function isn't working at all. It works locally just fine though. There is no difference between codebases, so the only thing I can think of is the connection string isn't being pulled from local.settings.json when on Azure. I mean, it wouldn't surprise me if that was the case since the file has the word local in it.

I tried putting the contents in the host.json but that didn't work either.

How do you specify the connection string when your Function is running on Azure?
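For context on why the file name matters: local.settings.json is read only by the local Functions host and is not published, so in Azure the same keys have to exist as Application Settings on the Function App. A typical local file looks roughly like this (the key name CosmosDBConnection is just an illustration matching a trigger's connection attribute, not taken from this project):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "CosmosDBConnection": "AccountEndpoint=...;AccountKey=...;"
  }
}
```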

""",azure-functions +52317092,Using azure app service deployment slots to run different apps?

Deployment slots on Azure App Service are really intended for running new versions of the same app for blue/green deploy strategies. The question I have is whether it is against the rules to run an app with multiple components (front end/back end) by putting them into different deployment slots. On the Standard plan I can load up to 5 services into a single Azure App Service plan. This would be a great cost-saving measure in non-prod environments where a single instance of each service is just fine. The questions I have are: a) is this against the rules? b) are there any pitfalls with this strategy?

Thanks

,azure-web-app-service +53679960,"The Pull Request merge commit has unexpected first parent (VSTS)

In our system every PR triggers a PR validation build on a build controller where:

  1. The build controller workspace is updated to origin/master
  2. The PR is merged
  3. The PR merge commit is checked out
  4. The build is triggered

My understanding is that the PR merge commit will have the following two parents:

  1. origin/master
  2. The last commit in the PR

However this is not always the case!

Please observe:

Get Sources build step output

2018-11-27T15:39:21.3096756Z    bf58eb148..b00bf1df0  master               -> origin/master 2018-11-27T15:39:21.3099964Z  * [new ref]             refs/pull/3987/merge -> pull/3987/merge 2018-11-27T15:39:31.3045930Z ##[command]git checkout --progress --force refs/remotes/pull/3987/merge 2018-11-27T15:39:32.8530040Z Previous HEAD position was ce1d1c670... Merge pull request 3982 from wfm/work/pbi476403 into master 2018-11-27T15:39:32.8530496Z HEAD is now at 81317ea59... Merge pull request 3987 from onboarding/476463-Automation_GettingStarted_Performance_Improvements into master 

The PR 3987 contains only one commit:

[screenshot]

From which my logic tells me that:

  1. At that moment origin/master = b00bf1df0
  2. The local PR merge commit i.e. pull/3987/merge = 81317ea59
  3. The first parent of pull/3987/merge would be origin/master i.e. b00bf1df0
  4. The second parent of pull/3987/merge would be the last commit of the PR i.e. b7d9617fc

Now I will go to the build controller and check there:

[screenshot]

I see that the first parent is not b00bf1df0 but some other commit 959f488bb.

I do not understand how this is possible. Can anyone explain?

""",azure-devops +51408740,Azure Web App FTP 550 Access Denied

I am trying to use FTP to upload specific files (not a full release) to an Azure Web App. Essentially I am using a PowerShell script to FTP files up to the web app in Azure. I can add new files and create files and folders, but when I try to overwrite or delete a file I get a 550 Access is denied.

I tried creating a new deployment credential and was able to log in, but the result was the same when trying to delete anything: 550 Access is Denied.

Is there any way to grant more permissions to this user or is this impossible? Thanks!

,azure-web-app-service +48437609,How to let docker container log output sdb disk in Azure cloud

Currently my cloud environment is based on Azure and the VM type is D3V2 (14G/200G). Azure VMs provide an OS disk called sda that has 30 GB and an ephemeral disk called sdb that has 200 GB. My container orchestration is based on mesos/marathon; all container logs output to sda, not sdb, so after several days the disk space is full. How can I utilize the sdb disk and have the logs output to sdb?

Following is the output of df -k on my VM:

root@dcos-agentprivate-service000003:/home/test# df -k Filesystem     1K-blocks    Used Available Use% Mounted on udev             7155096       0   7155096   0% /dev tmpfs            1435028  134880   1300148  10% /run /dev/sda1       30428648 7590548  22821716  25% / tmpfs            7175128       0   7175128   0% /dev/shm tmpfs               5120       0      5120   0% /run/lock tmpfs            7175128       0   7175128   0% /sys/fs/cgroup overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/2f479dc1d57d3ef848f957ae4b1751dc68eafab44249aa2028492eb44491ba9a/merged shm                65536       0     65536   0% /var/lib/docker/containers/6301aefde9113502687503e77e65ce6bb8c2d39eb678944c10a41198cbd58e74/shm overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/89ce9ca51728b61d29778131a7f2e6b859cba6b70b1777664d62b5154f741ac8/merged shm                65536       0     65536   0% /var/lib/docker/containers/5bb83ee7bd75b92c16044d03b216754e127ca9d53eb3fb17df88f881f7928f79/shm overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/4986839f2046dab76caacea057590af338433dda133165920facf8b62901bbf4/merged shm                65536       0     65536   0% /var/lib/docker/containers/58eb5ae765785d35941133607e9667214c65a89743325767755b4ff3f4a89e27/shm overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/ca68a2ce6c4b35a0efc2d8d5748e295686402f195b62bcf5aeb5e0b7b0d8af6c/merged shm                65536       0     65536   0% /var/lib/docker/containers/619b5dac97338672669420f9aec7cd72b83ed2cd170d0f3f98ca8bdc4a139d77/shm overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/b1e4f59da0d59d1cfff6725827dad8da1bc80913c37be8b3468bcd103a781c53/merged shm                65536       0     65536   0% /var/lib/docker/containers/5127f81741c87759df9865a417beaf2c8d09871680b6de66b5e3cbe51385354d/shm overlay         30428648 7590548  22821716  25% 
/var/lib/docker/overlay/65a999e7d61d67384db9e955cf55a363129d124dc5385fe38ccd0688053239f2-init/merged overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/8356b0e83f436e02a7445cdc1a89286cdd731f57826dccf98f4e46b006f430e0-init/merged overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/28eecf196b970f824e72bdf000a6da4725be3408ec47d3c6477054e39c12167f-init/merged overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/3c12ab53b60588bba34b96eca8591ec063bc9072ac925b6e8f2d1e9f62492583-init/merged overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/465eb669a1bc191e9a798636b5441ccdc4d9f414063cd9a83c3921a633e8c2c5-init/merged overlay         30428648 7590548  22821716  25% /var/lib/docker/overlay/7210916bf5ffe265223c71d902e9c34ab8a1489655470cba94b7d6a8d5123991-init/merged tmpfs            1435028       0   1435028   0% /run/user/1000 
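One common approach (an assumption on my part, not something shown in the df output above) is to move Docker's data directory onto the ephemeral disk, which Azure typically mounts at /mnt on Ubuntu VMs, so that container layers and json-file logs land on sdb instead of sda. With a recent Docker engine this is a daemon.json setting (older engines used the graph key or the --graph flag instead):

```json
{
  "data-root": "/mnt/docker"
}
```

After changing it you would stop Docker, copy /var/lib/docker to /mnt/docker, and restart the daemon. Note the ephemeral disk is wiped on redeploy, so this only suits data you can afford to lose.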
,azure-virtual-machine +52470234,"View TraceWriter.Info() logs in Azure Function without Application Insights

My Azure Function app uses the TraceWriter.Info() method to write logs. Very simple to use and it used to be very simply to view:

public static void Run([TimerTrigger(\""0 0 22 * * *\"", RunOnStartup = false)]TimerInfo myTimer, TraceWriter log) {    log.Info(\""My log!\""); } 

However, now when I go to the portal and click the \""Monitor\"" tab, which used to show the output in a console-like window, it demands that I set up Application Insights while giving me an error:

[screenshot of the error]

I cannot click the "Configure" button to set it up. As far as I know my app specifies nothing about Application Insights, yet the error seems to suggest that it does.

My questions:

  • Is it possible to get the old console window back?
  • Failing that, why can I not configure Application Insights?
""",azure-functions +56434124,"Deploy console webjob app to azure App service containing a webapi using azure devops pipelines

I have 2 Web APIs written in .NET Core 2.2. The 2 Web APIs are triggered by WebJobs, which are console apps in .NET Core 2.2. They are all different projects and in different repositories in Azure DevOps.

I am trying to deploy the Web APIs together with the WebJobs into 2 Web App services (e.g. WebApi1 + WebJob1 into App Service 1, and WebApi2 + WebJob2 into App Service 2) in Azure using the Azure DevOps build and release pipelines.

I am able to add the WebJobs manually into the App Service from the Azure portal and it works fine, but I want to deploy them using Azure DevOps pipelines.

I tried different ways to publish the WebJobs (console apps) with the Web API in the App Service, such as trying to publish to the App_Data folder from Azure DevOps.

I mainly followed the blog below.

https://www.andrewhoefling.com/Blog/Post/deploying-dotnet-core-webjobs-to-azure-using-azure-pipelines

But when I try to publish the WebJob, it overwrites the Web API code (all 4 projects have separate build/release pipelines). The WebJob code gets deployed in the site/wwwroot folder rather than the site/job folder.

[screenshot]

My Build steps:

[screenshots]

My Release steps:

[screenshot]

I am not sure what I am doing wrong. Is there a way to copy the WebJob files into the same App Service without overwriting the actual Web API code?

""",azure-devops +57215531,"Coverage status check failed?

I committed the changes to the pull request and it shows

"Code coverage status failed."

I have searched a lot but couldn't find the cause or solution to resolve this.

Azure pipeline test service: Diff coverage check failed. 0/70 (0.00 %) changed lines are covered up to update 2. Diff coverage target is 70.00 %.

Verification build is successful but the status is showing code coverage has failed.

""",azure-devops +55580436,"Restrict invocation of Azure functions to portal and timers

I'm trying to clear the following recommendation from the Azure portal:

[screenshot of the recommendation]

The App Services in question only exist to host Functions and those functions are only called by a timer (or if done manually through the portal). I don't need them open to the wider internet at all but I do need them visible to the Azure portal itself.

The only settings I see, though, are IP-based. Is there a specific set of IPs to whitelist for the Azure portal and timers to still work? (I tried "deny all" but then I get a message on the function overview page saying "Access restrictions have been added to your function app which may affect your ability to manage it from the portal.")

""",azure-functions +4494040,"Azure: How to create the WADLogsTable for capturing diagnostics code?

I have a worker role that I would like to get diagnostics feedback on. I added the appropriate connection string to the ServiceConfiguration.cscfg and the following code:

//DiagnosticMonitor.Start("DiagnosticsConnectionString");
DiagnosticMonitorConfiguration diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
diagConfig.WindowsEventLog.DataSources.Add("Application!*");
diagConfig.WindowsEventLog.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(5.0);
diagConfig.Logs.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(5.0);

Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitor.Start("DiagnosticsConnectionString", diagConfig);
CrashDumps.EnableCollection(true);

When I call System.Diagnostics.Trace.TraceInformation("test log") I expect to be able to find the record in the WADLogsTable of the target Azure Storage account. However, the table doesn't exist. How is it created? None of the documentation I've read covers this.
Thanks in advance

""",azure-storage +51749573,Does Azure Storage Emulator Support File Shares?

I have the Azure Storage Emulator running, but it currently looks like (as of v5.6.0.0) it only supports Blob, Queue and Table storage:

BlobEndpoint: http://127.0.0.1:10000/ QueueEndpoint: http://127.0.0.1:10001/ TableEndpoint: http://127.0.0.1:10002/

The confusing part is that when you configure a local connection via the desktop Azure Storage Explorer by selecting "Attach to a local emulator", there's an option for a Files port.

Am I missing something here?

,azure-storage +52271088,Execute python scripts in Azure DataFactory

I have my data stored in blobs and I have written a Python script to do some computations and create another CSV. How can I execute this in Azure Data Factory?
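For the execution side, one common route is a Custom Activity running the script on an Azure Batch pool; the script itself then only needs to read the input blob's text, compute, and produce the output CSV. A stdlib-only sketch of that compute step, assuming made-up column names and a placeholder threshold (the blob download/upload around it would use the storage SDK):

```python
# Stdlib-only sketch of the computation step such a script performs: take the
# CSV text pulled from the input blob, transform it, and return the CSV text
# to write to the output blob. Column names and threshold are placeholders.
import csv
import io

def transform(rows):
    # Placeholder computation: keep rows whose "value" column exceeds 10.
    return [r for r in rows if float(r["value"]) > 10.0]

def run(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["name", "value"])
    writer.writeheader()
    writer.writerows(transform(rows))
    return out.getvalue()
```

Keeping the transformation a pure text-in/text-out function makes it easy to test locally before wiring it into whatever activity invokes it.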

,azure-storage +54929065,PersistKeysToAzureBlobStorage(): Is there an equivalent Method for .Net Framework 4.6/4.x Applications?

Is there an equivalent method to the .NET Core PersistKeysToAzureBlobStorage() method that is available for .NET Framework 4.6 applications?

I have a .Net Framework 4.6 MVC app and would like to persist the encryption key used for cookie encryption/decryption to an Azure container. I've found that this can be done for .Net Core applications but need to do the same in a .Net Framework 4.6 app.

,azure-storage +55724970,"Azure build pipeline yaml template reference

I have a step definition template that I intend to use within build pipelines. The step definition's location is not under the same folder as the build pipeline itself.

During validation of the pipeline, Azure DevOps treats the build pipeline's location as the root location, which is prepended to the path of the reference.

Consider the following example code hierarchy:

azure
|----products
|        |----resource-type1
|        |        |----step-def.yaml
|        |----resource-type2
|        |        |----step-def.yaml
|----solutions
|        |----solution1
|        |        |----local-step-def.yaml
|        |        |----build.yaml
|        |----solution2
|        |        |----build.yaml

Following works when the build.yaml is as below

jobs:
- job: Linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - template: solution1/local-step-def.yml

If you change the template reference as below it does not work

  - template: ../products/resource-type1/step-def.yml 

When validation is done on the pipeline, Azure DevOps maps it to

# <path-of-the-build-pipeline>/<template-ref>
azure/solutions/solution1/<template-reference>

Here is the documentation https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops#step-re-use

So how can I map to the step-def.yaml file that lives in the products folder hierarchy?

""",azure-devops +45436313,How to migrate code from TFS 2012 to Visual Studio team services?

We are planning to migrate from TFS 2012 to VSTS and wanted to check whether there is any method that can be used to migrate the source code in TFS to VSTS (either Git or TFVC).

Many Thanks

,azure-devops +48429494,"Move DevTest Lab VM to another DevTest Lab

I'm trying to move a VM custom image from one DevTest Lab to another and can't seem to find an easy way to accomplish that. My VM is using managed disks and also has a data disk.

I've read the following article https://azure.microsoft.com/en-us/updates/azure-devtest-labs-changes-in-exporting-custom-image-vhd-files/ and it states that

Azure DevTest Lab now generates a managed image and "…This allows Sysprep'ed/deprovisioned custom images to support data disks in addition to the OS disk through a single image."

This is fine but the image that is created can't be exported.

Is it even possible to accomplish this, or am I missing something?

Thanks for your help

""",azure-virtual-machine +49887807,What is the bandwidth of a Standard B1ms Virtual Machine?

I cannot find any documentation on the bandwidth of a Standard B1ms machine. If you have any info, please share.

,azure-virtual-machine +55580136,"How to get the Azure function app operationid in my .netcore code

I have created a .NET Core 2.0 Azure Function App for Linux. Can I get the operationId below in my code? We need this for tracking purposes: while logging an exception, we want to include this operationId as well in my .NET Core code. [screenshot]

""",azure-functions +39637572,"Automatically generated region certificate in Azure

In the Azure portal, when I get the list of all resources I can see a machine-generated resource of type Microsoft.Web/certificates whose details I (as admin) can't view. At what point does this resource get created, and is it region-specific? The reason I ask about the region is that its name contains the name of the region itself. If it is region-specific, then I should have two of these automatically generated certificates, because I have resources in two regions. Could this resource have been generated by the Visual Studio publishing tool? How can I learn more about this resource?

[screenshot]

""",azure-web-app-service +50113731,"read image path from file share and store in table storage azure

I'm able to upload the image in the Azure file share using below code.

CloudStorageAccount cloudStorageAccount = ConnectionString.GetConnectionString();
CloudFileClient cloudFileClient = cloudStorageAccount.CreateCloudFileClient();

CloudFileShare fileShare = cloudFileClient.GetShareReference("sampleimage");
if (await fileShare.CreateIfNotExistsAsync())
{
    await fileShare.SetPermissionsAsync(
        new FileSharePermissions
        {
        });
}
//fileShare.CreateIfNotExists();

string imageName = Guid.NewGuid().ToString() + "-" + Path.GetExtension(imageToUpload.FileName);
CloudFile cloudFile = fileShare.GetRootDirectoryReference().GetFileReference(imageName);
cloudFile.Properties.ContentType = imageToUpload.ContentType;

await cloudFile.UploadFromStreamAsync(imageToUpload.InputStream);

    imageFullPath = cloudFile.Uri.ToString();
}
catch (Exception ex)
{
}
return imageFullPath;

Here is how I'm trying to read the file path (before inserting into a table):

public class ReadFileSharePath
{
    string Path = null;

    public string ReadFilePath()
    {
        try
        {
            CloudStorageAccount cloudStorageAccount = ConnectionString.GetConnectionString();
            CloudFileClient cloudFileClient = cloudStorageAccount.CreateCloudFileClient();
            CloudFileShare fileShare = cloudFileClient.GetShareReference("sampleimage");
            if (fileShare.Exists())
            {
                CloudFileDirectory rootdir = fileShare.GetRootDirectoryReference();
                CloudFileDirectory sampleDir = rootdir.GetDirectoryReference("sampleimage");
                if (sampleDir.Exists())
                {
                    // Get a reference to the file we created previously.
                    CloudFile file = sampleDir.GetFileReference("90e94676-492d-4c3c-beb2-1d8d48044e4e-.jpg");

                    // Ensure that the file exists.
                    if (file.Exists())
                    {
                        // Write the contents of the file to the console window.
                        //Console.WriteLine(file.DownloadTextAsync().Result);
                        Path = file.DownloadTextAsync().Result.ToString();
                    }
                }
            }
        }
        catch (Exception)
        {
            throw;
        }
        return Path;
    }
}

However, this if condition

if (sampleDir.Exists())

is failing, and control never enters the block.

I would like to store the file share path in Azure Table storage, and I need a partition key and row key. How can I achieve this? Any link or suggestion would help. Thanks.
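On the partition key / row key part: a table entity is just a PartitionKey, a RowKey (unique together) and free-form properties, so the file's URL can be carried like this. A small Python sketch; the grouping choice (share name as partition, file name as row key) is only one reasonable scheme, not a requirement:

```python
# Sketch of a Table storage entity holding the file-share path.
# PartitionKey + RowKey together must be unique; other properties are free-form.
def make_entity(share_name, file_name, file_url):
    return {
        "PartitionKey": share_name,  # e.g. group rows by file share
        "RowKey": file_name,         # must be unique within the partition
        "FilePath": file_url,        # the cloudFile.Uri value from the upload code
    }
```

Whatever SDK inserts the entity, the path itself would come from the file's URI captured at upload time rather than from DownloadTextAsync (which reads the file's contents, not its location).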

""",azure-storage +57374193,"Not able to run blob trigger when published on azure functions

I have created a simple blob trigger in Visual Studio, for which __init__.py is as below:

import logging

import azure.functions as func


def main(myblob: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")

and function.json is as below

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "mycontainer/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

local.settings.json looks as below

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=****;AccountKey=*****;EndpointSuffix=core.windows.net"
  }
}

This code works fine with Visual Studio on my local machine, but when published to Azure it cannot read the blob path from function.json and gives the error:

Invalid blob path specified : ''. Blob identifiers must be in the format 'container/blob'. 

I published the function using the following command to push the contents of local.settings.json:

func azure functionapp publish FUNCTIONNAME --build-native-deps --publish-local-settings -i 

Can anyone please guide me on what I am missing after publishing?

""",azure-functions +43543108,"integrate webapp with Azure VNET

I have deployed a Web app in Azure and is available in http://XXX.azurewebsites.net. I would like to limit the access to this site by placing the web app in the Virtual Network using Point to Site.

I have created a VNET and successfully established the Point-to-Site connection. Then I integrated the Web App with the created VNET.

Now clients who don't have a client certificate are also able to access the site/URL. How do I restrict that?

My expected behaviour is that clients who have the client certificate and VPN client package can access the site using the above URL; others should not be able to access the site via the "XXX.azurewebsites.net" URL.

Please help me in achieving this.

""",azure-web-app-service +49641086,Use and setup of WAF with Azure App Service Web Application?

I run a number of App Service ASP.NET MVC web applications. I think it would be a good idea to add a WAF in front of the App Service website to enable OWASP protection as well as more visibility into suspicious attacks. I would also want this to be linked into Azure Security Centre.

As far as I can see this is not a problem with VM websites, but for App Service websites I have seen an SO comment (April 2017) suggesting it may not be supported, although that information may be outdated now.

1) Am I just trying to replace existing threat detection features that are built into App Services, meaning adding a WAF is not required?

2) If required, are WAFs supported for App Service, and can they be linked to Azure Security Centre?

3) If required and possible then any pointers please?

By the way, I have considered the use of Cloudflare as a WAF wrapper around Azure, which looks interesting, but I initially wanted to check out Azure functionality to start with.

Thanks.

,azure-web-app-service +52049238,"Azure Storage Service Rest APIs: Constructing Signature String(stringToSign) dynamically with the given URI input in java

Is there a way to construct the stringToSign string dynamically for any given Azure Storage REST API from a given URI input, in Java?

https://docs.microsoft.com/en-us/rest/api/storageservices/authorize-with-shared-key
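For reference, the layout the linked page describes can be sketched as: the HTTP verb, a fixed list of standard headers, the canonicalized x-ms-* headers, then the canonicalized resource, signed with HMAC-SHA256 over the decoded account key. The question asks for Java, but the shape is language-independent; this is an illustrative Python approximation (details such as the empty-vs-zero Content-Length rule vary by service version, so treat it as a sketch of the spec, not a drop-in implementation):

```python
# Simplified sketch of the Shared Key string-to-sign for the Blob service.
import base64
import hashlib
import hmac

STANDARD_HEADERS = [
    "Content-Encoding", "Content-Language", "Content-Length", "Content-MD5",
    "Content-Type", "Date", "If-Modified-Since", "If-Match", "If-None-Match",
    "If-Unmodified-Since", "Range",
]

def string_to_sign(verb, headers, account, resource_path, query=None):
    parts = [verb] + [headers.get(h, "") for h in STANDARD_HEADERS]
    # Canonicalized headers: x-ms-* names lowercased and sorted lexicographically.
    xms = sorted((k.lower(), v) for k, v in headers.items()
                 if k.lower().startswith("x-ms-"))
    parts += [f"{k}:{v}" for k, v in xms]
    # Canonicalized resource: /account/path, then sorted query parameters.
    resource = f"/{account}{resource_path}"
    for k in sorted(query or {}):
        resource += f"\n{k.lower()}:{query[k]}"
    parts.append(resource)
    return "\n".join(parts)

def sign(account_key_b64, sts):
    # Signature = Base64(HMAC-SHA256(UTF-8(stringToSign), decoded account key))
    digest = hmac.new(base64.b64decode(account_key_b64),
                      sts.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()
```

Building the string from a parsed URI then reduces to splitting the path into account/resource and query parts before feeding them in.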

""",azure-storage +55286747,Python Memcached connect to Azure VM server

I have installed Memcached on an Azure VM server (Ubuntu). I now need to connect to this from my Python program that runs elsewhere.

When they were installed on the same server this worked:

import memcache

MEMCACHE_SOCKET_PATH = 'unix:<path_to>/memcached.sock'
memcache_client = memcache.Client([MEMCACHE_SOCKET_PATH], debug=0)

Now I'm not sure what to use for MEMCACHE_SOCKET_PATH. The VM running Memcached has a static IP address and I have created an endpoint (opened a port) to 11211. memcached.sock sits in the home directory.

This is how I am running Memcached on the VM:

memcached -d -m 500 -s $HOME/memcached.sock -P $HOME/memcached.pid 
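Since the socket file only exists on the VM itself, a remote client has to address the server as host:port instead of a unix path, i.e. memcache.Client(['<vm-ip>:11211'], debug=0), and memcached must be listening on TCP (started with -p 11211 and without -s) in addition to the endpoint being open. A tiny stdlib sketch of that endpoint form; the IP below is a placeholder:

```python
# Sketch: the remote form of the server address is "host:port" rather than a
# unix socket path; the client line then becomes
#     memcache.Client([MEMCACHE_SERVER], debug=0)
MEMCACHE_SERVER = "203.0.113.10:11211"  # placeholder for the VM's static IP

def parse_endpoint(endpoint):
    # Split "host:port" into pieces (e.g. for a reachability check).
    host, _, port = endpoint.rpartition(":")
    return host, int(port)
```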
,azure-virtual-machine +51542507,How to extract filename and other properties from multipart form data prior to uploading file to azure

I want to extract and set the filename based on application logic on the server, prior to uploading the file to Azure.

My current scenario: Client -> Application Web Api -> Azure Storage

Following is code:

public async Task<IHttpActionResult> UploadProfilePic()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    AzureStorageCls = new MyAzureStorage(WebApiApplication.AllowedImageExtensions);

    // Need to set file name based on custom logic prior to upload of image.
    // The method below calls Request.Content.ReadAsMultipartAsync(provider) of a custom MultipartStreamProvider.
    BlobStorage BlobImage = await AzureStorageCls.UploadImageToAzure(modifiedFileName, Request);

    return Ok();
}

We want to customize the file name that we send to azure blob storage based on ids

But as far as I know, we can only get all the properties after calling ReadAsMultipartAsync(provider), via the provider.

I am sure there must be a simpler way to solve this that I am unaware of

Any help will be highly appreciated. Thanks in advance

,azure-storage +50031391,"asp.net-core-signalr can't stay connected for more than a minute or so

Just upgraded our app service from .net 4.6 to .net core. Part of that migration was to upgrade to the new .net core version of SignalR. Everything was working fine while running locally but when it was published to an azure app service the connection doesn't stay connected for more than a minute (usually less than 30 seconds). The error that I'm seeing in the console logs is:

Error: Connection disconnected with error 'Error: Websocket closed with status code: 1006 ()'.

Has anyone else experienced this before? I ran across this post but it has very little feedback. Not sure if it has the same root cause or not.

""",azure-web-app-service +56430949,"Error "No file format header found" when building Cloud Service (ccproj) project in Visual Studio with Azure Pipelines

I was recently trying to set up CI for a Cloud Service project. It builds fine in Visual Studio 2017 and 2019. However when MSBuild is called to run it in Azure DevOps / ADO I get the following build error:

"No file format header found"

Well that's annoying! Initially I thought it was a BOM issue or an XML issue. Fixing that didn't work. Then I found some articles here about NuGet causing issues but that wasn't it either.

Here's my build step:

- task: VSBuild@1
  displayName: 'Build Worker Cloud Service - INT'
  inputs:
    solution: '**\MyService.ccproj'
    msbuildArgs: '/t:Publish /t:restore /p:SkipInvalidConfigurations=true /p:BclBuildImported=Ignore /p:OutputPath=bin\ /p:PublishDir=$(build.artifactstagingdirectory)\appcloud\Provisioning.QA /p:TargetProfile=QA'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration_Release)'
    restoreNugetPackages: true

""",azure-devops +45281935,Monitoring the Azure app services

I was wondering if someone can shed some light on the application monitoring and alerting solution that's being used to specifically monitor the Azure App Service. We have multiple API apps running on App Service and we would like to monitor certain metrics (e.g. availability, response time, number of requests received, etc.). I enabled Application Insights on each of these apps and the result is quite promising; it fulfils all my requirements, but there's one small issue: I need to scroll through each app to see its performance. I can't aggregate them all in one place. I would like to create a centralized dashboard for all the aforementioned metrics and have them displayed. I tried using OMS, but it seems to be lacking a lot of functionality.

Any pointer would be very appreciated.

,azure-web-app-service +29496567,replication with SQL AZure from Azure VM

Is it possible to configure replication that can transfer data from SQL Azure DB to AZURE VM with SQL Server?

Basically I want to move transactional data from SQL Azure to a local server in an Azure VM where SQL Server Standard is installed. I use SSRS from the server on the Azure VM and want to run reports from this data source.

We tried to configure Geo-Replication for the SQL Azure DB, but unless it is a Premium DB (which is expensive) we cannot choose the region for the secondary database. By default, Azure uses the North region for the South region and similar, which will make the reports run very slowly.

,azure-virtual-machine +19655868,"Streaming video from Azure blob storage

I'm having problems getting an .mp4 video stored in Azure blob storage to show in a website hosted on Azure. The video type is set to video/mp4, the storage account is not public but it is linked to the web role, and I've updated the version using this bit of code:

var credentials = new StorageCredentials("myaccountname", "mysecretkey");
var account = new CloudStorageAccount(credentials, true);
var client = account.CreateCloudBlobClient();
var properties = client.GetServiceProperties();
properties.DefaultServiceVersion = "2012-02-12";
client.SetServiceProperties(properties);

I'm not using any video player, just the HTML5 video tag. I also don't need anything fancy; I just want the video to play.

Looking at the network tab in Chrome's dev tools there are two entries for the GET request to fetch the video. The first one has a status of (pending) and the next one is (canceled).

I also gave it a link to a video which is in the website's content folder. This one also starts as pending but is resolved with a 206 Partial Content, and the video plays just fine.

I'm out of stuff to look at and any help and pointers are appreciated.

""",azure-storage +57520028,"How to use different Service Connection for every stage in Azure Pipelines?

When using multistage pipelines from yaml in Azure Pipelines and every stage is deploying resources to a separate environment I'd like to use a dedicated service connection for each stage. In my case every stage is making use of the same deployment jobs i.e. yaml templates. So I'm using a lot of variables that have specific values dependent on the environment. This works fine except for the service connection.

Ideally the variable that contains the service connection name is added to the stage level like this:

stages:
- stage: Build
    # (Several build-stage specific jobs here)

- stage: DeployToDEV
  dependsOn: Build
  condition: succeeded()
  variables:
    AzureServiceConnection: 'AzureSubscription_DEV' # This seems like a logical solution
  jobs:
    # This job would ideally reside in a yaml template
    - job: DisplayDiagnostics
      pool:
        vmImage: 'Ubuntu-16.04'
      steps:
        - checkout: none
        - task: AzurePowerShell@4
          inputs:
            azureSubscription: $(AzureServiceConnection)
            scriptType: inlineScript
            inline: |
              Get-AzContext
            azurePowerShellVersion: LatestVersion

- stage: DeployToTST
  dependsOn: Build
  condition: succeeded()
  variables:
    AzureServiceConnection: 'AzureSubscription_TST' # Same variable, different value
  jobs:
    # (Same contents as DeployToDEV stage)

When this code snippet is executed it results in the error message:

There was a resource authorization issue: "The pipeline is not valid. Job DisplayDiagnostics: Step AzurePowerShell input ConnectedServiceNameARM references service connection $(AzureServiceConnection) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."

So it probably can't expand the variable AzureServiceConnection soon enough when the run is started. But if that's indeed the case then what's the alternative solution to make use of separate service connections for every stage?

One option that works for sure is setting the service connection name directly to all tasks but that would involve duplicating identical yaml tasks for every stage which I obviously want to avoid.

Anyone has a clue on this? Thanks in advance!

""",azure-devops +53060229,"Azure DevOps Extension custom service endpoint for ID/KEY

I am developing an Azure DevOps extension which contains a service endpoint to hold a secret ID/key. My requirement is to have an endpoint that consists of just a connection name, ID and key. I have gone through the list of endpoints provided by Microsoft but couldn't find a suitable option to satisfy my requirement.

https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=vsts#sep-ssh

The closest solution I found is below, but it contains an input box for the server URL, which I need to omit (in this example, although I don't define a server URL, it still displays in the popup dialog). Please refer to the image below.

[screenshot]

Is it possible to remove the server URL from the above dialog box, or is there a better endpoint type I can use for this requirement? Please be kind enough to share some light with me.

""",azure-devops +48075879,"How can I add one more property using c# into existing json object inside Azure Functions?

Inside the Azure function my input is ServiceBus queue properties

The code to retrieve all the properties is:

using System.Net;
using Newtonsoft.Json;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    string jsonContent = await req.Content.ReadAsStringAsync();

    return req.CreateResponse(HttpStatusCode.OK, jsonContent);
}

Output is -

[
    "{\"DeliveryCount\":\"1\",\"MessageId\":\"bac52de2d23a487a9ed388f7313d93e5\"}"
]

I want to add one more property to this JSON object. How can I add it here in the Azure function so that I can return the modified object like below?

[
    "{\"DeliveryCount\":\"1\",\"MessageId\":\"bac52de2d23a487a9ed388f7313d93e5\",\"MyProperty\":\"TEST\"}"
]
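Since the array element is itself a JSON-encoded string, the change amounts to: parse the outer array, parse the inner string, add the key, and re-serialize both levels (in the question's C# that would be JObject.Parse on the inner string, set the property, then re-serialize). A Python sketch of that transformation shape:

```python
# The payload is an array whose element is a JSON-encoded string, so adding a
# property means decoding two levels and re-encoding both on the way out.
import json

def add_property(payload, key, value):
    outer = json.loads(payload)      # -> list of JSON strings
    inner = json.loads(outer[0])     # -> dict with DeliveryCount, MessageId, ...
    inner[key] = value               # add the new property
    outer[0] = json.dumps(inner)     # re-serialize the inner object
    return json.dumps(outer)         # re-serialize the outer array
```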
""",azure-functions +48517386,Which Reports can deploy in Azure Standard Website Plan?

I want to know whether there is any reporting tool available that is supported in an Azure App/Web Service.

As per my reading, SSRS requires a virtual machine. Is it also the case that Crystal Reports cannot work with an Azure App/Web Service?

I don't want to take another service like Power BI or a virtual machine to run reports, so please guide me: is there any solution for this?

Thanks

,azure-web-app-service +46114240,"Download Azure VHD to local use powershell

How can I download an Azure VHD to my local machine with PowerShell?

I read the documentation, but I can't find the blob URL, like "https://XXX.blob.core.windows.net/vhds/XXX.vhd".

Anybody know that?

Thanks

""",azure-virtual-machine +50707153,"Azure Managed Disk not showing when trying to attach to VM

I have a premium Azure Managed Disk (SSD) in the same region as a Windows VM, but when I go to attach it via the Azure portal (Settings -> Disks -> + Add data disk), the dropdown under Name says "No managed disk available" (see below). What do I need to do?

[screenshot]

""",azure-virtual-machine +45570201,Azure create data disk and copy files to it

Is it possible to create a data disk, copy files to it, and then attach it to an existing virtual machine on Azure? What would be the broad steps for this to work?

,azure-virtual-machine +19541198,"Clipboard/file sharing support in Ubuntu virtual machines on Azure

I've created a new Ubuntu 12.04 virtual machine on Azure. Then using

$ sudo apt-get install ubuntu-desktop
$ sudo apt-get install xrdp

installed the standard Ubuntu and xrdp which implements the RDP protocol.

Going to the Azure portal, downloading the *.rdp file, and then logging in

[screenshot]

with the following check boxes still doesn't enable clipboard.

[screenshots]

I've found several references that suggest installing X11RDP, but I really don't want to unless there is no other option.

Also it seems that

Text clipboard in xrdp works with certain restrictions.

Am I missing something, or is copying/pasting files and text not supported?

""",azure-virtual-machine +55948976,"How to get the filepath while uploading a file to azure-storage in django

I have a form through which files can be uploaded. The uploaded file has to be stored in Azure Storage. I am using create_blob_from_path to upload a file to Azure Storage. create_blob_from_path expects a file path as one of the parameters, but how can I get a file path in this case, as the operation has to happen on the fly (the uploaded file cannot be stored in any local storage)? It should get stored directly in Azure.

if request.method == "POST":
    pic = request.FILES['pic']
    block_blob_service = BlockBlobService(account_name='samplestorage', account_key='5G+riEzTzLmm3MR832NEVjgYxaBKA4yur6Ob+A6s5Qrw==')
    container_name = 'quickstart'
    block_blob_service.create_container(container_name)
    block_blob_service.set_container_acl(container_name, public_access=PublicAccess.Container)
    block_blob_service.create_blob_from_path(container_name, pic, full_path_to_file)  # full_path_to_file = ?

the file uploaded dynamically has to be stored in Azure
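One way around needing a path at all: the same v2 SDK also exposes create_blob_from_stream, and Django's request.FILES['pic'] is already a readable file-like object, so it can be passed straight through without touching local disk. A sketch with the service object injected so the shape is clear (container and names as in the question):

```python
# Sketch, assuming the azure-storage v2 SDK from the question: instead of
# create_blob_from_path (which needs a file on disk), pass the uploaded file
# object straight to create_blob_from_stream, so nothing hits local storage.
def upload_to_blob(block_blob_service, container_name, uploaded_file):
    # uploaded_file is Django's request.FILES['pic']: a readable stream that
    # also carries the original file name.
    block_blob_service.create_blob_from_stream(
        container_name, uploaded_file.name, uploaded_file)
    return uploaded_file.name
```

In the view this would replace the create_blob_from_path call; create_blob_from_bytes with uploaded_file.read() is an equivalent alternative for small files.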

""",azure-storage +41149423,"Avoid over-writing blobs AZURE on the server

I have a .NET app which uses the WebClient and the SAS token to upload a blob to the container. The default behaviour is that a blob with the same name is replaced/overwritten.

Is there a way to change this on the server, i.e. to prevent replacing an already existing blob?

I've seen the Avoid over-writing blobs AZURE but it is about the client side.

My goal is to secure the server from overwriting blobs.

AFAIK the file is uploaded directly to the container without a chance to intercept the request and check e.g. existence of the blob.

Edited

Let me clarify: My client app receives a SAS token to upload a new blob. However an evil hacker can intercept the token and upload a blob with an existing name. Because of the default behavior the new blob will replace the existing one (effectively deleting the good one).

I am aware of different approaches to deal with the replacement on the client. However I need to do it on the server somehow even against the client (which could be compromised by the hacker).
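One service-side mechanism worth noting: a Put Blob request carrying If-None-Match: * is rejected by the service with 409 (BlobAlreadyExists) when the blob already exists, i.e. the precondition is evaluated on the server per request, not in client code. A stdlib sketch of the header shape; note that a hostile client can simply omit the header, so against a stolen token the stronger directions are short-lived, narrowly scoped SAS tokens or container immutability policies:

```python
# Sketch of conditional Put Blob headers. "If-None-Match: *" makes the
# service itself refuse the write when the blob already exists.
def put_blob_headers(content_length, content_type):
    return {
        "x-ms-blob-type": "BlockBlob",
        "If-None-Match": "*",  # precondition: only succeed if the blob is new
        "Content-Length": str(content_length),
        "Content-Type": content_type,
    }
```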

""",azure-storage +47191570,Azure: Provisioning of virtual machine fails.

I am trying to provision Azure virtual machines to the same availability set, one after the other. I see this error when trying to provision in Australia East:

Provisioning failed. Allocation failed. Please try reducing the VM size or number of VMs, retry later, or try deploying to a different Availability Set or different Azure location. (AllocationFailed)
,azure-virtual-machine +51072365,"Azure Function (ServiceBus) System.Private.CoreLib Error (Works locally)

When I run my Azure functions locally they work like a charm. But when I publish and run them in the cloud I get following error:

[Error] System.Private.CoreLib: Exception while executing function: Alert. Function.PowerBI: Abandom message in AzureFunction.PowerBi.

I am running a function called Alert in a Function App called PowerBi. Is there a way, maybe in Kudu, to see the actual error, or how should I interpret an error in

System.Private.CoreLib

Code:

[FunctionName("Alert")]
public static async Task Alert([ServiceBusTrigger(Topic.Alert, Subscription.PowerBi, Connection = "servicebusconnectionstring")] Message message, TraceWriter log)
{
    if (!MessageHandler.Validate(message, Subscription.PowerBi))
        return;

    var json = Encoding.UTF8.GetString(message.Body);
    var messageCounter = message.SystemProperties.DeliveryCount;

    try
    {
        var alert = Validator.ValidateCloudAlert(json);
        if (alert != null)
        {
            var powerBiAlert = alert.ToPowerBiAlert();
            var result = await PowerBiService.AddRow(powerBiAlert);
            if (!result)
                throw new PowerBiCommandException($"PowerBiService.AddRows returned value: {result}");
        }
    }
    catch (Exception e)
    {
        EventLogger.LoggException("Function.PowerBi.Alert", e, new Dictionary<string, string>() { { "Messsage", json } });
        if (messageCounter >= 5)
        {
            EventLogger.LoggEvent("DeadLetterQueue", new Dictionary<string, string>() { { "Function", "Function.PowerBi.Alert" }, { "Messsage", json } });
            await QueueService.SendAsync(Queue.Deadletter, JsonConvert.DeserializeObject<CloudAlert>(json), Topic.Alert, Subscription.PowerBi);
        }
        else
            throw new MessageAbandonException($"Abandom message in AzureFunction.PowerBi");
    }
}

Thanks for the help!
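The control flow in the function above — retry until DeliveryCount reaches a threshold, then dead-letter — can be sketched language-agnostically. A minimal Python illustration (the message handlers here are hypothetical stand-ins, not the Azure Service Bus SDK):

```python
# Sketch of the retry/dead-letter decision used in the function above.
# `process` and `send_to_dead_letter` are illustrative stand-ins,
# not Azure Service Bus SDK calls.

MAX_DELIVERIES = 5

class AbandonMessage(Exception):
    """Raised to abandon the message so the broker redelivers it."""

def handle(delivery_count, process, send_to_dead_letter):
    try:
        process()
        return "completed"
    except Exception:
        if delivery_count >= MAX_DELIVERIES:
            send_to_dead_letter()
            return "dead-lettered"
        # Re-raising makes the host abandon the message; the broker
        # increments DeliveryCount on the next delivery attempt.
        raise AbandonMessage("abandon message")

# Example: a message that always fails is abandoned 4 times, then dead-lettered.
dead = []
for count in range(1, 6):
    try:
        result = handle(count, lambda: 1 / 0, lambda: dead.append(count))
    except AbandonMessage:
        result = "abandoned"
print(result, dead)  # dead-lettered [5]
```

The exception surfacing as `System.Private.CoreLib` is simply the user-thrown `MessageAbandonException` bubbling out of the function on the abandon path.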

""",azure-functions +55557524,"How do I Publish using Azure DevOps?

I am trying to set up a CI/CD pipeline and publish process for a .NET Core 2.1 sample project (the default project that comes out of the box) using Azure DevOps. So far I have not altered the default code nor added/removed any references in the project. It builds and runs locally without any error.

I have created a simple build pipeline with tasks like Restore, Build, and Publish.

For some reason Publish fails with the following error.

##[section]Starting: Publish
==============================================================================
Task         : .NET Core
Description  : Build, test, package, or publish a dotnet application, or run a custom dotnet command. For package commands, supports NuGet.org and authenticated feeds like Package Management and MyGet.
Version      : 2.149.0
Author       : Microsoft Corporation
Help         : [More Information](https://go.microsoft.com/fwlink/?linkid=832194)
==============================================================================
[command]C:\\windows\\system32\\chcp.com 65001
Active code page: 65001
[command]\""C:\\Program Files\\dotnet\\dotnet.exe\"" publish \""D:\\a\\1\\s\\Devops Demo\\Devops Demo.csproj\"" --configuration release --output D:\\a\\1\\a\\Devops Demo
Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

MSBUILD : error MSB1008: Only one project can be specified.
Switch: Demo

For switch syntax, type \""MSBuild /help\""
##[error]Error: C:\\Program Files\\dotnet\\dotnet.exe failed with return code: 1
##[error]Dotnet command failed with non-zero exit code on the following projects : D:\\a\\1\\s\\Devops Demo\\Devops Demo.csproj
##[section]Finishing: Publish

I have Googled and found several people complaining about this error, but nobody has a definite solution for it.

So far I have not tried anything fancy in my project or CI/CD config. The above error is a blocker for me, as I am unable to proceed with my simple DevOps setup.

Please let me know if you have any suggestion for me to fix this error.

My YAML is as below

pool:
  name: Hosted VS2017
#Your build pipeline references an undefined variable named ‘Parameters.RestoreBuildProjects’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
#Your build pipeline references the ‘BuildConfiguration’ variable, which you’ve selected to be settable at queue time. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab, and then select the option to make it settable at queue time. See https://go.microsoft.com/fwlink/?linkid=865971

steps:
- task: DotNetCoreCLI@2
  displayName: Restore
  inputs:
    command: restore
    projects: '$(Parameters.RestoreBuildProjects)'

- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    projects: '$(Parameters.RestoreBuildProjects)'
    arguments: '--configuration $(BuildConfiguration)'

- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: True
    arguments: '--configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory)'
    workingDirectory: 'Devops Demo'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
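The MSB1008 failure in the log comes from the unquoted output path: `--output D:\a\1\a\Devops Demo` contains a space, so MSBuild parses `Demo` as a second project. One hedged fix (assuming the stock DotNetCoreCLI@2 task shown in the YAML) is to quote the output argument so the staging-directory path survives intact:

```yaml
# Sketch only: quote the path so the space in "Devops Demo" is not split,
# or point --output at a space-free folder.
- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    publishWebProjects: True
    arguments: '--configuration $(BuildConfiguration) --output "$(build.artifactstagingdirectory)"'
```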
""",azure-devops +31862825,"Is it useless to setup Azure VM \Availability Set\"" without setting up \""Load Balancing\""?

Let's say I have VM1 and VM2 behind the cloud service WS.cloudapp.com, and a web app deployed on port 80 in both VM1 and VM2. Because I have not yet set up load balancing, only one VM (say VM1) can own the port 80 endpoint. When VM1 is down, end users cannot connect to WS.cloudapp.com. Doesn't that make configuring the availability set useless?

""",azure-virtual-machine +50340216,"ListBlobsSegmentedAsync doesn't return all blob directories

I have a hierarchy-structured blob container with around 12k blobs.

--level1

  --level21
    --level211
    --level212
  --level22

So currently I have two issues:

  1. I cannot see ListBlobs, even though it appears in a lot of articles. I know it is weird, but the compiler doesn't accept it: https://i.stack.imgur.com/bVnrC.jpg. I am using C# on .NET Core 1.1 and WindowsAzure.Storage 8.0, so it should not be a version issue.

  2. So I am using ListBlobsSegmentedAsync instead. For instance, there are 80 sub-folders under level21, but this method only returns 10 of them: await blobs.ListBlobsSegmentedAsync(false, BlobListingDetails.None, 20000, null, null, null);
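ListBlobsSegmentedAsync returns one segment at a time; to see every result you must loop, feeding the returned continuation token back in until it is null. A minimal Python sketch of that pagination pattern (the paging function here is a mock, not the Azure SDK):

```python
# Continuation-token pagination pattern: keep calling until the token is None.
# `list_segmented` is a mock standing in for ListBlobsSegmentedAsync.

ITEMS = [f"blob-{i}" for i in range(25)]

def list_segmented(token, page_size=10):
    """Return (results, next_token); next_token is None on the last page."""
    start = token or 0
    page = ITEMS[start:start + page_size]
    next_token = start + page_size if start + page_size < len(ITEMS) else None
    return page, next_token

all_blobs = []
token = None
while True:
    page, token = list_segmented(token)
    all_blobs.extend(page)
    if token is None:  # last segment reached
        break

print(len(all_blobs))  # 25
```

The same shape applies in C#: pass `result.ContinuationToken` back into the next `ListBlobsSegmentedAsync` call until it is null.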

""",azure-storage +31513870,"Azure Website Logs Including Internal IPs in Entries

For the last couple of weeks we have been seeing an increasing amount of entries in the web logs of our Azure website whose originating IP address (in the c-ip column of the log) appears to be in the range 100.90.X.X. It has now reached more than half of all the traffic being logged and is interfering with our ability to perform analytics and threat detection.

According to the Wikipedia entry on reserved IP addresses, this block is \""Used for communications between a service provider and its subscribers when using a Carrier-grade NAT as specified by RFC 6598\"", so could this be a problem in Azure?

Looking at the logs, the traffic comes from many different user agents (both normal users and the common legitimate bots) and requests a broad range of resources, so it does not immediately appear suspicious other than the IPs. It looks more like legitimate traffic being given an incorrect (internal) IP.

It seems to be only affecting static content (e.g. images and XML files) but not ALL static content.

We are using a single Small Standard instance in Western Europe with a single web app running on it. We are not using any scaling features. There is a linked SQL database and the website runs primarily over HTTPs. 95%+ of our traffic comes from UK sources. We have not made any changes to logging which is handled by Azure.

Is there any way that we can return to seeing the actual IPs here or is this malicious traffic?

""",azure-web-app-service +57332314,"reading content of blob from azure function

I'm trying to read the content of a blob inside an Azure Function.

Here's the code:

Note: If I comment out the using block and return the blob i.e.

return new OkObjectResult(blob);

I get back the blob object.

However, if I use the using block I get a 500.

Any idea why I can't get the content?

string storageConnectionString = \""myConnectionString\"";
CloudStorageAccount storageAccount;
CloudStorageAccount.TryParse(storageConnectionString, out storageAccount);
CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = cloudBlobClient.GetContainerReference(\""drawcontainer\"");

var blob = container.GetBlockBlobReference(\""notes.txt\"");

using (StreamReader reader = new StreamReader(blob.OpenRead()))
{
    content = reader.ReadToEnd();
}
return new OkObjectResult(content);
""",azure-functions +57613198,"UrlHelper returning http links on Azure App Service

I have a service that, when deployed on Azure App Services, returns http links instead of https links when using UrlHelper. When testing on my development machine it returns https links as expected, and the service is available and accessed through https requests.

An example of the type of route from my startup I'm trying to use is:

routes.MapRoute(
    \""FooBar\"",
    \""api/Foo/{Id}/Bar\"");

The link is then constructed using:

IUrlHelper _urlHelper = // Injected into class via service registration
int id = 42; // Arbitrary value for example
_urlHelper.Link(\""FooBar\"", new { Id = id });

When running on my local machine using Docker on Windows from Visual Studio I get a link of https://localhost:1234/api/Foo/42/Bar but on my deployed Linux Container App Service on Azure I get http://my-app-name.azurewebsites.net/api/Foo/42/Bar.

I don't know what I'm doing wrong to get an http link instead of an https link and would appreciate any advice/pointing in the right direction.
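When TLS terminates at Azure's front end, the app process receives plain HTTP, so link generation sees scheme `http`; the usual cure is to honor the `X-Forwarded-Proto` header the proxy adds (in ASP.NET Core, via the forwarded-headers middleware). The idea, sketched in Python with a hypothetical header dictionary:

```python
# Sketch: recover the original scheme behind a TLS-terminating proxy.
# The proxy (Azure front end) forwards the client's scheme in X-Forwarded-Proto.

def effective_scheme(request_scheme, headers):
    """Prefer the proxy-supplied scheme over the one the app socket saw."""
    return headers.get("X-Forwarded-Proto", request_scheme)

# The app socket saw http (the proxy terminated TLS), but the client used https:
scheme = effective_scheme("http", {"X-Forwarded-Proto": "https"})
link = f"{scheme}://my-app-name.azurewebsites.net/api/Foo/42/Bar"
print(link)  # https://my-app-name.azurewebsites.net/api/Foo/42/Bar
```

Locally there is no proxy in front of the app, which is why the development machine generates https links directly.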

""",azure-web-app-service +41380707,Any way to upload file to azure virtual machine from external application

I have a Windows application, and I want to upload a file from it to an Azure virtual machine's data disk or OS disk. I need this because a process (run by a third-party add-on) on the Azure virtual machine can only read files from a local drive, but that file is generated by my Windows application.

Is there any way to achieve this?

Thanks in advance!!

,azure-virtual-machine +50225208,Export from Azure Storage Queue to a CSV file

What is the easiest way to export data from an Azure Queue to a CSV file without writing code?

,azure-storage +40852776,Why is docker stats CPU Percentage greater than 100 times number of cores

I have an Azure VM with 2 cores. From my understanding, the CPU % returned by docker stats can be greater than 100% if multiple cores are used, so it should max out at 200% for this VM. However, I get results like this, with CPU % greater than 1000%:

CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O               BLOCK I/O             PIDS
545d4c69028f        3.54%               94.39 MiB / 6.803 GiB   1.35%               3.36 MB / 1.442 MB    1.565 MB / 5.673 MB   6
008893e3f70c        625.00%             191.3 MiB / 6.803 GiB   2.75%               0 B / 0 B             0 B / 24.58 kB        35
f49c94dc4567        0.10%               46.85 MiB / 6.803 GiB   0.67%               2.614 MB / 5.01 MB    61.44 kB / 0 B        31
08415d81c355        0.00%               28.76 MiB / 6.803 GiB   0.41%               619.1 kB / 3.701 MB   0 B / 0 B             11
03f54d35a5f8        1.04%               136.5 MiB / 6.803 GiB   1.96%               83.94 MB / 7.721 MB   0 B / 0 B             22
f92faa7321d8        0.15%               19.29 MiB / 6.803 GiB   0.28%               552.5 kB / 758.6 kB   0 B / 2.798 MB        7
2f4a27cc3e44        0.07%               303.8 MiB / 6.803 GiB   4.36%               32.52 MB / 20.27 MB   2.195 MB / 0 B        11
ac96bc45044a        0.00%               19.34 MiB / 6.803 GiB   0.28%               37.28 kB / 12.76 kB   0 B / 3.633 MB        7
7c1a45e92f52        2.20%               356.9 MiB / 6.803 GiB   5.12%               86.36 MB / 156.2 MB   806.9 kB / 0 B        16
0bc4f319b721        14.98%              101.8 MiB / 6.803 GiB   1.46%               138.1 MB / 64.33 MB   0 B / 73.74 MB        75
66aa24598d27        2269.46%            1.269 GiB / 6.803 GiB   18.65%              1.102 GB / 256.4 MB   14.34 MB / 3.412 MB   50

I can verify there are only two cores:

$ grep -c ^processor /proc/cpuinfo
2
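For context, the docker CLI computes CPU % as the container's CPU-time delta over the host's total CPU-time delta, scaled by the number of online CPUs times 100 — so the ceiling is `online_cpus × 100`, and if the hypervisor misreports the CPU count, the ceiling inflates with it. A sketch of that widely documented formula (the nanosecond values are illustrative):

```python
# docker CLI CPU % formula (sketch):
#   cpu_percent = (container_cpu_delta / system_cpu_delta) * online_cpus * 100
# system_cpu_delta already sums over every core, so the ceiling is
# online_cpus * 100 — a misreported CPU count inflates the result.

def cpu_percent(container_delta_ns, system_delta_ns, online_cpus):
    if system_delta_ns <= 0:
        return 0.0
    return (container_delta_ns / system_delta_ns) * online_cpus * 100.0

# A container using both cores of a 2-core host flat out:
print(cpu_percent(2_000_000_000, 2_000_000_000, 2))   # 200.0
# The same usage with the CPU count misreported as 24:
print(cpu_percent(2_000_000_000, 2_000_000_000, 24))  # 2400.0
```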

The output of lshw -short is also confusing to me:

H/W path      Device           Class      Description
=====================================================
                               system     Virtual Machine
/0                             bus        Virtual Machine
/0/0                           memory     64KiB BIOS
/0/5                           processor  Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz
/0/6                           processor  Xeon (None)
/0/7                           processor  (None)
/0/8                           processor  (None)
/0/9                           processor  (None)
/0/a                           processor  (None)
/0/b                           processor  (None)
/0/c                           processor  (None)
/0/d                           processor  (None)
/0/e                           processor  (None)
/0/f                           processor  (None)
/0/10                          processor  (None)
...

with well over 50 processors listed

,azure-virtual-machine +50807409,"Many 4 character storage containers being created in my storage account

I have an Azure storage account.

For a while now, something has been creating 4-character empty containers, as shown here (there are hundreds of them):


This storage account is used by:

  • Function Apps
  • Document Db (Cosmos)
  • Terraform State
  • Container Registry for Docker images

It's not a big deal but I don't want millions of empty containers being created by an unknown process.

Note 1: I have looked for any way to find more statistics / history of these containers, but I can't find any.

Note 2: We don't have any custom code that creates storage containers in our release pipelines (i.e., PowerShell or CLI).

thanks Russ

""",azure-storage +13995734,Azure VMs Virtual Network inter-communication

I'm new to Azure (strike 1) and totally suck at networking (strike 2).

Nevertheless I've got two VMs up and running in the same virtual network; one will act as a web server and the other will act as a SQL database server.

While I can see that their internal IP addresses are both in the same network, I'm unable to verify that the machines can communicate with each other, and I am confused about the appropriate place to address this.

Microsoft's own documentation says

All virtual machines that you create in Windows Azure can automatically communicate using a private network channel with other virtual machines in the same cloud service or virtual network. However you need to add an endpoint to a machine for other resources on the Internet or other virtual networks to communicate with it. You can associate specific ports and a protocol to endpoints. Resources can connect to an endpoint by using a protocol of TCP or UDP. The TCP protocol includes HTTP and HTTPS communication.

So why can't the machines at least ping each other via internal IPs? Is it Windows Firewall getting in the way? I'm starting to wonder if I've chose the wrong approach for a simple web server/database server setup. Please forgive my ignorance. Any help would be greatly appreciated.

,azure-virtual-machine +54008309,"Azure functions local.settings.json represented in appsettings.json for a ServiceBusTrigger

I currently have an Azure Function using the ServiceBusTrigger binding:

 [ServiceBusTrigger(\""%TopicName%\"", \""%SubscripionName%\"", Connection = \""MyConnection\"")]
         string catclogueEventMsgs, ILogger log, ExecutionContext context)

which uses this local.settings.json file

   \""Values\"": {
     …
     \""MyConnection\"": \""Endpoint=sb://testxxxxxxxxxxxxxxxxxx\"",
     \""SubscriptionName\"": \""testsubscriptionName\"",
     \""TopicName\"": \""testtopicName\""
   }

How do I represent this in the appsettings.json file? Will it be like the below?

   \""Values\"": {
     \""MyConnection\"": \""Endpoint=sb://testxxxxxxxxxxxxxxxxxx\"",
     \""SubscriptionName\"": \""testsubscriptionName\"",
     \""TopicName\"": \""testtopicName\""
   }

Instead of using a “Values” object, can I use e.g. a “MySubs” object like the below?

   \""MySubs\"": {
     \""MyConnection\"": \""Endpoint=sb://testxxxxxxxxxxxxxxxxxx\"",
     \""SubscriptionName\"": \""testsubscriptionName\"",
     \""TopicName\"": \""testtopicName\""
   }

If it's possible to use the above settings, how do I represent this in the ServiceBusTrigger binding? Would I change it to this?

 [ServiceBusTrigger(\""%MySubs.TopicName%\"", \""%MySubs.SubscripionName%\"", Connection = \""MySubs.MyConnection\"")]
         string catclogueEventMsgs, ILogger log, ExecutionContext context)
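One caveat worth noting (hedged, since the question leaves it open): the entries under Values in local.settings.json surface as environment variables, and .NET configuration flattens nested sections with `__` in environment-variable names, which appear as `MySubs:TopicName` to IConfiguration — not with a `.` separator. A sketch under that assumption, with illustrative names:

```json
{
  "Values": {
    "MySubs__MyConnection": "Endpoint=sb://testxxxxxxxxxxxxxxxxxx",
    "MySubs__SubscriptionName": "testsubscriptionName",
    "MySubs__TopicName": "testtopicName"
  }
}
```

In Azure, the same flattened names would go into the Function App's application settings; whether the `%…%` binding token resolves the nested form depends on the Functions host version, so testing locally first is advisable.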
""",azure-functions diff --git a/4-Interpretibility/explain nlp models with lime & shap.ipynb.amltmp b/4-Interpretibility/explain nlp models with lime & shap.ipynb.amltmp deleted file mode 100644 index 303a161..0000000 --- a/4-Interpretibility/explain nlp models with lime & shap.ipynb.amltmp +++ /dev/null @@ -1,401 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "source": [ - "# Explain NLP models with LIME & SHAP\n", - "\n", - "ref: https://towardsdatascience.com/explain-nlp-models-with-lime-shap-5c5a9f84d59b" - ], - "metadata": {}, - "id": "4de51810" - }, - { - "cell_type": "code", - "source": [ - "%pip install nltk" - ], - "outputs": [], - "execution_count": null, - "metadata": {}, - "id": "f46eab10" - }, - { - "cell_type": "code", - "source": [ - "%pip install lime" - ], - "outputs": [], - "execution_count": null, - "metadata": { - "scrolled": true - }, - "id": "63927906" - }, - { - "cell_type": "code", - "source": [ - "%pip install lxml" - ], - "outputs": [], - "execution_count": null, - "metadata": {}, - "id": "85b3b7d3" - }, - { - "cell_type": "code", - "source": [ - "import pandas as pd\n", - "import numpy as np\n", - "import sklearn\n", - "import sklearn.ensemble\n", - "import sklearn.metrics\n", - "from sklearn.utils import shuffle\n", - "from __future__ import print_function\n", - "from io import StringIO\n", - "import re\n", - "from bs4 import BeautifulSoup\n", - "from nltk.corpus import stopwords\n", - "from sklearn.model_selection import train_test_split\n", - "from sklearn.feature_extraction.text import CountVectorizer\n", - "from sklearn.linear_model import LogisticRegression\n", - "from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score\n", - "import lime\n", - "from lime import lime_text\n", - "from lime.lime_text import LimeTextExplainer\n", - "from sklearn.pipeline import make_pipeline\n", - "\n", - "df = pd.read_csv('stack-overflow-data.csv')\n", - "df = df[pd.notnull(df['tags'])]\n", - "df = 
df.sample(frac=0.5, random_state=99).reset_index(drop=True)\n", - "df = shuffle(df, random_state=22)\n", - "df = df.reset_index(drop=True)\n", - "df['class_label'] = df['tags'].factorize()[0]\n", - "class_label_df = df[['tags', 'class_label']].drop_duplicates().sort_values('class_label')\n", - "label_to_id = dict(class_label_df.values)\n", - "id_to_label = dict(class_label_df[['class_label', 'tags']].values)\n", - "\n", - "REPLACE_BY_SPACE_RE = re.compile('[/(){}\\[\\]\\|@,;]')\n", - "BAD_SYMBOLS_RE = re.compile('[^0-9a-z #+_]')\n", - "# STOPWORDS = set(stopwords.words('english'))\n", - "\n", - "def clean_text(text):\n", - " \"\"\"\n", - " text: a string\n", - " \n", - " return: modified initial string\n", - " \"\"\"\n", - " text = BeautifulSoup(text, \"lxml\").text # HTML decoding. BeautifulSoup's text attribute will return a string stripped of any HTML tags and metadata.\n", - " text = text.lower() # lowercase text\n", - " text = REPLACE_BY_SPACE_RE.sub(' ', text) # replace REPLACE_BY_SPACE_RE symbols by space in text. substitute the matched string in REPLACE_BY_SPACE_RE with space.\n", - " text = BAD_SYMBOLS_RE.sub('', text) # remove symbols which are in BAD_SYMBOLS_RE from text. substitute the matched string in BAD_SYMBOLS_RE with nothing. 
\n", - "# text = ' '.join(word for word in text.split() if word not in STOPWORDS) # remove stopwors from text\n", - " return text\n", - " \n", - "df['post'] = df['post'].apply(clean_text)\n", - "\n", - "list_corpus = df[\"post\"].tolist()\n", - "list_labels = df[\"class_label\"].tolist()\n", - "X_train, X_test, y_train, y_test = train_test_split(list_corpus, list_labels, test_size=0.2, random_state=40)\n", - "vectorizer = CountVectorizer(analyzer='word',token_pattern=r'\\w{1,}', ngram_range=(1, 3), stop_words = 'english', binary=True)\n", - "train_vectors = vectorizer.fit_transform(X_train)\n", - "test_vectors = vectorizer.transform(X_test)" - ], - "outputs": [], - "execution_count": 2, - "metadata": {}, - "id": "79828f46" - }, - { - "cell_type": "code", - "source": [ - "logreg = LogisticRegression(n_jobs=1, C=1e5)\n", - "logreg.fit(train_vectors, y_train)\n", - "pred = logreg.predict(test_vectors)\n", - "accuracy = accuracy_score(y_test, pred)\n", - "precision = precision_score(y_test, pred, average='weighted')\n", - "recall = recall_score(y_test, pred, average='weighted')\n", - "f1 = f1_score(y_test, pred, average='weighted')\n", - "print(\"accuracy = %.3f, precision = %.3f, recall = %.3f, f1 = %.3f\" % (accuracy, precision, recall, f1))" - ], - "outputs": [], - "execution_count": null, - "metadata": {}, - "id": "a41b531f" - }, - { - "cell_type": "code", - "source": [ - "df.head(3) " - ], - "outputs": [ - { - "output_type": "execute_result", - "execution_count": 2, - "data": { - "text/html": "
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
posttagsclass_label
0how do i move something in rails i m a progr...ruby-on-rails0
1c# how to output specific array searches t...c#1
2integerparseint and string format with decimal...java2
\n
", - "text/plain": " post tags \\\n0 how do i move something in rails i m a progr... ruby-on-rails \n1 c# how to output specific array searches t... c# \n2 integerparseint and string format with decimal... java \n\n class_label \n0 0 \n1 1 \n2 2 " - }, - "metadata": {} - } - ], - "execution_count": 2, - "metadata": {}, - "id": "95bfeb2d" - }, - { - "cell_type": "code", - "source": [ - "c = make_pipeline(vectorizer, logreg)\n", - "class_names=list(df.tags.unique())\n", - "explainer = LimeTextExplainer(class_names=class_names)\n", - "\n", - "idx = 1877\n", - "exp = explainer.explain_instance(X_test[idx], c.predict_proba, num_features=6, labels=[4, 8])\n", - "print('Document id: %d' % idx)\n", - "print('Predicted class =', class_names[logreg.predict(test_vectors[idx]).reshape(1,-1)[0,0]])\n", - "print('True class: %s' % class_names[y_test[idx]])" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "Document id: 1877\n", - "Predicted class = sql\n", - "True class: sql\n" - ] - } - ], - "execution_count": 3, - "metadata": {}, - "id": "194511ad" - }, - { - "cell_type": "code", - "source": [ - "print ('Explanation for class %s' % class_names[4])\n", - "print ('\\n'.join(map(str, exp.as_list(label=4))))\n", - "\n", - "print ()\n", - "print ()\n", - "print ('Explanation for class %s' % class_names[8])\n", - "print ('\\n'.join(map(str, exp.as_list(label=8))))" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "Explanation for class sql\n", - "('sql', 0.6404053612434537)\n", - "('date', 0.0914677082552682)\n", - "('query', 0.08881521149248939)\n", - "('execute', -0.047357727316833076)\n", - "('values', 0.033471191382502444)\n", - "('1', 0.03169582966262677)\n", - "\n", - "\n", - "Explanation for class python\n", - "('sql', -0.11959173268905181)\n", - "('range', 0.05426026720542654)\n", - "('query', -0.051683576182342116)\n", - "('date', -0.021911389057099367)\n", - "('d', 0.021685680133927004)\n", - "('1', 
0.017820790332260344)\n" - ] - } - ], - "execution_count": 4, - "metadata": {}, - "id": "6ef4e1c2" - }, - { - "cell_type": "code", - "source": [ - "# exp = explainer.explain_instance(X_test[idx], c.predict_proba, num_features=6, top_labels=2)\n", - "print(exp.available_labels())" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "[4, 8]\n" - ] - } - ], - "execution_count": 5, - "metadata": {}, - "id": "60943e76" - }, - { - "cell_type": "code", - "source": [ - "exp.show_in_notebook(text=y_test[idx], labels=(4,))" - ], - "outputs": [ - { - "output_type": "display_data", - "data": { - "text/html": "\n \n \n
\n \n \n ", - "text/plain": "" - }, - "metadata": {} - } - ], - "execution_count": 6, - "metadata": {}, - "id": "08cfded1" - }, - { - "cell_type": "code", - "source": [ - "from sklearn.preprocessing import MultiLabelBinarizer\n", - "import tensorflow as tf\n", - "from tensorflow.keras.preprocessing import text\n", - "import keras.backend.tensorflow_backend as K\n", - "K.set_session\n", - "import shap\n", - "\n", - "tags_split = [tags.split(',') for tags in df['tags'].values]\n", - "tag_encoder = MultiLabelBinarizer()\n", - "tags_encoded = tag_encoder.fit_transform(tags_split)\n", - "num_tags = len(tags_encoded[0])\n", - "train_size = int(len(df) * .8)\n", - "\n", - "y_train = tags_encoded[: train_size]\n", - "y_test = tags_encoded[train_size:]\n", - "\n", - "class TextPreprocessor(object):\n", - " def __init__(self, vocab_size):\n", - " self._vocab_size = vocab_size\n", - " self._tokenizer = None\n", - " def create_tokenizer(self, text_list):\n", - " tokenizer = text.Tokenizer(num_words = self._vocab_size)\n", - " tokenizer.fit_on_texts(text_list)\n", - " self._tokenizer = tokenizer\n", - " def transform_text(self, text_list):\n", - " text_matrix = self._tokenizer.texts_to_matrix(text_list)\n", - " return text_matrix\n", - " \n", - "VOCAB_SIZE = 500\n", - "train_post = df['post'].values[: train_size]\n", - "test_post = df['post'].values[train_size: ]\n", - "processor = TextPreprocessor(VOCAB_SIZE)\n", - "processor.create_tokenizer(train_post)\n", - "X_train = processor.transform_text(train_post)\n", - "X_test = processor.transform_text(test_post)\n", - "\n", - "def create_model(vocab_size, num_tags):\n", - " model = tf.keras.models.Sequential()\n", - " model.add(tf.keras.layers.Dense(50, input_shape = (VOCAB_SIZE,), activation='relu'))\n", - " model.add(tf.keras.layers.Dense(25, activation='relu'))\n", - " model.add(tf.keras.layers.Dense(num_tags, activation='sigmoid'))\n", - " model.compile(loss = 'binary_crossentropy', optimizer='adam', metrics = 
['accuracy'])\n", - " return model\n", - "model = create_model(VOCAB_SIZE, num_tags)\n", - "model.fit(X_train, y_train, epochs = 2, batch_size=128, validation_split=0.1)\n", - "print('Eval loss/accuracy:{}'.format(model.evaluate(X_test, y_test, batch_size = 128)))" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "Epoch 1/2\n", - "113/113 [==============================] - 0s 3ms/step - loss: 0.3060 - accuracy: 0.0953 - val_loss: 0.1997 - val_accuracy: 0.2975\n", - "Epoch 2/2\n", - "113/113 [==============================] - 0s 2ms/step - loss: 0.1694 - accuracy: 0.4654 - val_loss: 0.1419 - val_accuracy: 0.5875\n", - "32/32 [==============================] - 0s 1ms/step - loss: 0.1410 - accuracy: 0.5753\n", - "Eval loss/accuracy:[0.141045480966568, 0.5752500295639038]\n" - ] - } - ], - "execution_count": 3, - "metadata": {}, - "id": "a62c2723" - }, - { - "cell_type": "code", - "source": [ - "attrib_data = X_train[:200]\n", - "explainer = shap.DeepExplainer(model, attrib_data)\n", - "num_explanations = 20\n", - "shap_vals = explainer.shap_values(X_test[:num_explanations])\n", - "words = processor._tokenizer.word_index\n", - "word_lookup = list()\n", - "for i in words.keys():\n", - " word_lookup.append(i)\n", - "word_lookup = [''] + word_lookup\n", - "shap.summary_plot(shap_vals, feature_names=word_lookup, class_names=tag_encoder.classes_)" - ], - "outputs": [ - { - "output_type": "stream", - "name": "stdout", - "text": [ - "WARNING:tensorflow:From /anaconda/envs/azureml_py38/lib/python3.8/site-packages/shap/explainers/_deep/deep_tf.py:239: set_learning_phase (from tensorflow.python.keras.backend) is deprecated and will be removed after 2020-10-11.\n", - "Instructions for updating:\n", - "Simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.\n" - ] - }, - { - "output_type": "stream", - "name": "stderr", - "text": [ - "keras is no longer supported, please use tf.keras instead.\n" 
- ] - }, - { - "output_type": "display_data", - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAAgkAAAI0CAYAAACXhrqLAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nOzdeXxVxf3/8ddNgmBISNghEBJEC6K1UUdrhCBFxKIEEYsBxARUrChQ/al8qWBBxK2AVYtb3SJrKy6ViKCismipOLXWohUEDQkENAESSZAlyf39cZOQhHuTm/UuvJ+PRx7cc87MnM/Jo/V+MjNnxuF0OhERERGpLsTXAYiIiIh/UpIgIiIibilJEBEREbeUJIiIiIhbShJERETErTBfB+BvMjIynMnJyb4OQ0REpLk4PF1QT4KIiIi4pSRBRERE3FKSICIiIm4pSRARERG3lCSIiIiIW0oSRERExC0lCSIiIuKWkgQRERFxS0mCiIiIuKUkQURERNxSkiAiIiJuKUkQERERt5QkiIiIiFtKEkRERMQtJQkiIiLilsPpdPo6Br/imF/su1/IEd/kbEmHShu9zQFFhxu9zerOLDjY4Dbi9+9vhEhq1jkvt171Ouza08iRuERk7WqSdj0JI7uRWspqpHaq2nfp+kZpZ2vv/EZpp7rPutSv3oaoxo2j3IpWfZqm4XKh/epeJ6QedTwpPa/x2gI42rNRm3NOatOo7ZVxeLqgngQRERFxS0mCiIiIuKUkQURERNxSkiAiIiJu+VWSYIzJNMaM83UcIiIi4mdJgoiIiPgPJQkiIiLiVpivA3CjhzHmfeCXQCZws7X2H8aYMOAeYDzQFvgM+J21dosxpj3wAxBrrc0xxgwC3gdutNa+WFZ3H3CZtXZz8z+SiIhI4PHHnoQbgKlAFPAe8HLZ+buBVOAKoAuwEXjPGNPGWrsP+A8wuKzsZcD2Sse/BEoB2xwPICIiEgz8MUl41lr7pbW2BHgeON0YEwVMAB6x1n5trT0CzAFKgCvL6q3leFIwGLgXuNQY4yg7/tBa2/hLC4qIiAQpf0wSKq9FW1T2byQQC3xXfqHsCz+z7Dy4koRLjTHtgJ8BrwF5wC9wJQlrmzRqERGRIOOPSYIn2UB8+YExJqTsuHxh+I1Ae+A2YKO19hiuxOBqXMMNShJERETqIJCShHRgmjHmZ8aYU4AZuCZergKw1v4E/AO4C9dcBnBNXrwd2Gut3dbsEYuIiAQwf3y7wZN5QEvgXVyTGj8Hhlhrf6xUZi3wK44nCeuAcFxDDyIiIlIHfpUkWGvjqx1nUnULy1llP57qPwg8WOn4R6BFowYpIiJykgik4QYRERFpRkoSRERExC2H0+n0dQx+JSMjw5mcnOzrMERERJqLw9MF9SSIiIiIW0oSRERExC0lCSIiIuKWkgQRERFxS0mCiIiIuKUkQURERNzSK5DVOOYXN84v5EjD8q+kQ423q/WAosP1rntmwcEG3Tt+//4G1a9N57zcJm2/w649tRfyQkTWrnrXDavYw6wusup9P0/2Xbq+QfW39s5vlDg+69IozQCwIcq7cita9am9UGi/ugcQUo86AKXn1a380Z71u48HzkltGrU98Tm9AikiIiJ1oyRBRERE3FKSICIiIm4FTZJgjFltjJnm6zhERESChV9tFd0Q1tqhvo5BREQkmARNT4KIiIg0rqDpSTDGrAPWAkuA74BU4PdALLAJSLPWNs77bCIiIieBYO5JSAEGAN2A1sAc34YjIiISWII5SbjPWptnrf0RWAYYXwckIiISSII5Sag8tFAERPoqEBERkUAUzEmCiIiINICSBBEREXFLSYKIiIi4FTSvQFprB1Y6dFS7lg6kN2M4IiIiAU89CSIiIuKWkgQRERFxy+F0On0dg1
/JyMhwJicn+zoMERGR5uLwdEE9CSIiIuKWkgQRERFxS0mCiIiIuKUkQURERNxSkiAiIiJuKUkQERERt5QkiIiIiFtaJ6Eax/zihv9CjtQt90o6VFqv2wwoOlyvemcWHKxXvZrE79/f4DY65+U2QiTe6bBrT+2FykRk7apT22Fk1zGarDqV3nfp+jq277K1d36dyn/WpV638WhDlOdrK1r1qX/Dof3qX7eykHq0U3pezdeP9qxXKM5JbepVT6SetE6CiIiI1I2SBBEREXFLSYKIiIi4pSRBRERE3AqKJMEYs84YM9PXcYiIiASToEgSREREpPGF+TqAhjLGLASSgERjzHRgN3AWcA8wHmgLfAb8zlq7xVdxioiIBJqA70mw1k4GNgL3W2sjrLW9gbuBVOAKoEvZ9feMMXr5WERExEsBnyR4MAF4xFr7tbX2CDAHKAGu9G1YIiIigSNYk4RY4LvyA2ttKZBZdl5ERES8ECxJQvV1jbOB+PIDY0xI2XFd18sVERE5aQX8xMUye4HTKx2nA9OMMRtw9SD8H65nXdXskYmIiASoYEkS/gS8ZIzJx/V2QwLQEngXiAI+B4ZYa3/0XYgiIiKBJSiSBGvtp8DZ1U7PKvsRERGRegiWOQkiIiLSyJQkiIiIiFsOp9Pp6xj8SkZGhjM5OdnXYYiIiDQXh6cL6kkQERERt5QkiIiIiFtKEkRERMQtJQkiIiLilpIEERERcUtJgoiIiLilVyCrccwvrtsv5Igrz0o6VH2PqeMGFB1uUExnFhysc534/fvdnu+cl9ugWLzVYdcer8pFZO1q0H3Cat2zK6tB7buz79L1dSq/tXe+V+U+61KfaNzbENWw+ita9Wl4EKH9Gt6GJyH1aLv0PO/LHu1Z9/arcU5q0+A2RJqJXoEUERGRulGSICIiIm4pSRARERG3lCSIiIiIWwGXJBhj0o0xz/s6DhERkWDn90mCMWadMWamr+MQERE52fh9kiAiIiK+EebrAGpijFkIJAGJxpjpwG5gE9DSGPMcMAooAuZYa5+tVC8JeAjoCxwAngIetdZqUQgREREv+XVPgrV2MrARuN9aG2Gt7V126TdABtAOmAIsNMbEARhj+gJvA/OAjsCVwGTg+mYOX0REJKD5dU9CDT6w1q4s+/y6MSYfSAB2ArcCK6y1b5Zd/7qsRyIVWNT8oYqIiASmQE0Sqq/5WwREln3uCQwyxoysdD0Eal2/V0RERCoJhCTB86YI7u0EXrTW3tYUwYiIiJwsAiFJ2AucXofyTwHrjTFrgDWAE/gZ0NFaW7edeURERE5ifj1xscyfAGOMyTfGfFlbYWvtFmAYcDuuYYkfgHRckxhFRETES37fk2Ct/RQ4u5Yy8dWONwGXNmFYIiIiQS8QehJERETEBxxOp9YXqiwjI8OZnJzs6zBERESai8PTBfUkiIiIiFtKEkRERMQtJQkiIiLilpIEERERcUtJgoiIiLilJEFERETcUpIgIiIibmmdhGoc84sb9xdyxDd5WNKh2vfFGlB0uNHud2bBwXrXjd+/v9YynfNy692+tzrsqr656IkisnbVWiasxg1Hs+oQkcu+S2vecmRr73yv2vmsy4nnNkSdeG5Fqz5etVej0H4Nqx/SwPoApefVvc7Rnm5POye1aWAwIn5N6ySIiIhI3ShJEBEREbeUJIiIiIhbShJERETELSUJIiIi4paSBBEREXErzNcBNCVjzO+ASUA34ACwFJhprS3xaWAiIiIBIKiTBGAXMBTIBBKANWWfn/VdSCIiIoEhqJMEa+1rlQ7/bYxZDFyKkgQREZFaBXWSYIwZA/w/4DRcz3oK8E+fBiUiIhIggnbiojEmFlgCzAW6WmujgCepYflJEREROS5okwQgAtfz5QLHjDEXAdf7NiQREZHAEbRJgrX2f8As4E0gH5gOLPdpUCIiIgEkqOckWGvnAHN8HYeIiEggCtqeBBEREWkYJQkiIiLilsPpdPo6Br+SkZHhTE
L7uy3qloAS5w/7SkEopWZh94cpcEuIdFwMcJZlfYJdOxdFzKkYROVx2vkUorZeEpRSV2FfG/0Ru9PT08AHgXTQE6IklJZ9xbKsxwubh1xuKH1iCewnHPpSEnZnzNKqKnaTfSqwBfgBu7lTCJGd7CsOudwghBBCCI+kJUEIIYQQHkklQQghhBAeSSVBCCGEEB5JJUEIIYQQHkklQQghhBAe/T9sVuQEvmdUNQAAAABJRU5ErkJggg==\n", - "text/plain": "
" - }, - "metadata": { - "needs_background": "light" - } - } - ], - "execution_count": 4, - "metadata": {}, - "id": "136abf8b" - }, - { - "cell_type": "code", - "source": [], - "outputs": [], - "execution_count": null, - "metadata": {}, - "id": "13285d6a" - } - ], - "metadata": { - "kernelspec": { - "name": "python38-azureml", - "language": "python", - "display_name": "Python 3.8 - AzureML" - }, - "language_info": { - "name": "python", - "version": "3.8.1", - "mimetype": "text/x-python", - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "pygments_lexer": "ipython3", - "nbconvert_exporter": "python", - "file_extension": ".py" - }, - "kernel_info": { - "name": "python38-azureml" - }, - "nteract": { - "version": "nteract-front-end@1.0.0" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} \ No newline at end of file