Functions Documentation
Overview
About Azure Functions
Getting started
Durable Functions
Serverless comparison
Hosting plan options
Quickstarts
Create your first function
Visual Studio Code
C#
Java
JavaScript
PowerShell
Python
TypeScript
Other (Go/Rust)
Visual Studio
Command line
C#
Java
JavaScript
PowerShell
Python
TypeScript
Bicep
ARM template
Azure Arc (preview)
Publish code project
Publish Linux container
Connect to a database
Visual Studio Code
Connect to storage
Visual Studio Code
Visual Studio
Command line
Tutorials
Functions with Logic Apps
Develop Python functions with VS Code
Create serverless APIs using Visual Studio
Networking
Connect to a Virtual Network
Establish private site access
Use an outbound NAT gateway
Identity-based connections
Use identity for host connections
Use identity for triggers and bindings
Access Azure SQL with managed identity
Image resize with Event Grid
Create a serverless web app
Machine learning with TensorFlow
Image classification with PyTorch
Create a custom Linux image
Functions on IoT Edge device
Java with Azure Cosmos DB and Event Hubs
Samples
Azure Serverless Community Library
Azure Samples
C#
Java
JavaScript
PowerShell
Python
TypeScript
Azure CLI
CLI sample index
Create function app
Serverless function app
Serverless Python app
Scalable Premium plan app
Dedicated (App Service) plan app
Integrate services
Connect Azure Storage
Connect Azure Cosmos DB
Python mount Files share
Continuous deployment
GitHub deployment
Azure PowerShell
Concepts
Best practices
General best practices
Performance and reliability
Manage connections
Storage considerations
Error handling and function retries
Security
Compare runtime versions
Hosting and scale
Consumption plan
Premium plan
Dedicated plan
Deployments
Events and messaging
Connect to services
Event-driven scaling
Reliable event processing
Concurrency
Designing for identical input
Triggers and bindings
About triggers and bindings
Binding example
Register binding extensions
Binding expression patterns
Use binding return values
Handle binding errors
Frameworks
Express.js
Security
Security overview
Security baseline
Monitor function executions
Diagnostics
Consumption plan costs
Functions Proxies
Networking options
IP addresses
Custom handlers
High availability
Languages
Supported languages
C#
In-process
Isolated process
Script (.csx)
F#
JavaScript
Java
PowerShell
Python
TypeScript
How-to guides
Develop
Developer guide
Local development
Develop and debug locally
Visual Studio Code development
Visual Studio development
Core Tools development
Event Grid Blob Trigger local development
Create functions
HTTP trigger
Azure portal
Command line
Visual Studio
Visual Studio Code
Java using Gradle
Java using Eclipse
Java using IntelliJ IDEA
Kotlin using Maven
Kotlin using IntelliJ
Linux App Service plan
Linux Consumption plan
Premium plan
Azure for Students Starter
Azure Cosmos DB trigger
Blob storage trigger
Queue storage trigger
Timer trigger
Connect to services
How to connect to services
Azure Cosmos DB - portal
Storage
Azure portal
Visual Studio Code
Visual Studio
Java
Python
Debug
Debug local PowerShell functions
Debug Event Grid trigger locally
Dependency injection
Manage connections
Error handling and retries
Manually run a non HTTP-triggered function
Bring dependencies to function apps
Develop Python worker extensions
Deploy
Continuous deployment
Deployment slots
Build and deploy using Azure Pipelines
Build and deploy using GitHub Actions
Zip deployment
Run from package
Functions in Kubernetes
Automate resource deployment
Deploy using the Jenkins plugin
Configure
Manage a function app
Set the runtime version
Disable a function
Networking
Geo-disaster recovery
Move across regions
Monitor
Monitor function apps with Azure Monitor
Configure monitoring
Analyze telemetry data
Streaming logs
Diagnostic logs
Authenticate
Configure auth providers
Authenticate with Azure AD
Authenticate with Facebook
Authenticate with GitHub
Authenticate with Google
Authenticate with Twitter
Authenticate with an OpenID Connect provider
Authenticate using Sign in with Apple (Preview)
Customize sign-ins/outs
Access user identities
Work with tokens
Manage API versions
File-based configuration
Secure
Add SSL cert
Restrict IPs
Use a managed identity
Reference secrets from Key Vault
Encrypt site data
Integrate
Connect to services
Azure Cosmos DB - portal
Storage
Azure portal
Visual Studio Code
Visual Studio
Python
SignalR
C#
Java
JavaScript
Python
Work with Event Grid
Start/Stop VMs
Overview
Deploy and configure
Manage and monitor
Remove Start/Stop VMs
Troubleshoot
Connect to SQL Database
Connect to a virtual network
Create OpenAPI definitions
API Management integration (portal)
Visual Studio with API Management (C#)
Use a managed identity
Customize HTTP function endpoint
Manage on-premises resources
Troubleshoot
Storage connections
Azure Cosmos DB bindings
Python functions
General troubleshooting
Scale and performance
Memory profiling
Reference
API references
ARM template
Azure CLI
Azure Functions Core Tools
Azure PowerShell
Java
Python
App settings reference
Triggers and bindings
Blob storage
Overview
Trigger
Input
Output
Azure Cosmos DB
Functions 1.x
Functions 2.x and higher
Overview
Trigger
Input
Output
Azure SQL
Overview
Input
Output
Dapr
Event Grid
Overview
Trigger
Output
Event Hubs
Overview
Trigger
Output
IoT Hub
Overview
Trigger
Kafka
Overview
Trigger
Output
HTTP and webhooks
Overview
Trigger
Output
Mobile Apps
Notification Hubs
Queue storage
Overview
Trigger
Output
RabbitMQ
Overview
Trigger
Output
SendGrid
Service Bus
Overview
Trigger
Output
SignalR Service
Overview
Trigger
Input
Output
Table storage
Overview
Input
Output
Timer
Twilio
Warmup
Errors and diagnostics
.NET worker rules
AZFW0001
SDK rules
AZF0001
AZF0002
host.json 2.x reference
host.json 1.x reference
Monitoring data
Networking FAQ
Resources
Build your skills with Microsoft Learn
Architecture guidance
Azure Roadmap
Pricing
Language support policy
Pricing calculator
Quota information
Regional availability
Videos
Microsoft Q&A question page
Stack Overflow
Twitter
Provide product feedback
Azure Functions GitHub repository
Azure updates
Introduction to Azure Functions
8/2/2022 • 2 minutes to read
Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save
on costs. Instead of worrying about deploying and maintaining servers, the cloud infrastructure provides all the
up-to-date resources needed to keep your applications running.
You focus on the pieces of code that matter most to you, and Azure Functions handles the rest.
We often build systems to react to a series of critical events. Whether you're building a web API, responding to
database changes, processing IoT data streams, or even managing message queues - every application needs a
way to run some code as these events occur.
To meet this need, Azure Functions provides "compute on-demand" in two significant ways.
First, Azure Functions allows you to implement your system's logic into readily available blocks of code. These
code blocks are called "functions". Different functions can run anytime you need to respond to critical events.
Second, as requests increase, Azure Functions meets the demand with as many resources and function instances
as necessary - but only while needed. As requests fall, any extra resources and application instances drop off
automatically.
Where do all the compute resources come from? Azure Functions provides as many or as few compute
resources as needed to meet your application's demand.
Providing compute resources on-demand is the essence of serverless computing in Azure Functions.
Scenarios
In many cases, a function integrates with an array of cloud services to provide feature-rich implementations.
The following is a common, but by no means exhaustive, set of scenarios for Azure Functions.

| If you want to... | Then... |
| --- | --- |
| Build a web API | Implement an endpoint for your web applications using the HTTP trigger |
| Process file uploads | Run code when a file is uploaded or changed in blob storage |
| Build a serverless workflow | Chain a series of functions together using durable functions |
| Respond to database changes | Run custom logic when a document is created or updated in Cosmos DB |
| Create reliable message queue systems | Process message queues using Queue Storage, Service Bus, or Event Hubs |
| Analyze IoT data streams | Collect and process data from IoT devices |
| Process data in real time | Use Functions and SignalR to respond to data in the moment |
As you build your functions, you have the following options and resources available:
Use your preferred language: Write functions in C#, Java, JavaScript, PowerShell, or Python, or use a custom handler to use virtually any other language.
Automate deployment: From a tools-based approach to using external pipelines, there's a myriad of deployment options available.
Troubleshoot a function: Use monitoring tools and testing strategies to gain insights into your apps.
Flexible pricing options: With the Consumption plan, you only pay while your functions are running, while the Premium and App Service plans offer features for specialized needs.
Next steps
Get started through lessons, samples, and interactive tutorials
Getting started with Azure Functions
8/2/2022 • 3 minutes to read
Introduction
Azure Functions allows you to implement your system's logic into readily-available blocks of code. These code
blocks are called "functions".
Use the following resources to get started.
| Action | Resources |
| --- | --- |
| Create your first function | Using Visual Studio, Visual Studio Code, or the command line |
| Explore an interactive tutorial | Choose the best Azure serverless technology for your business scenario; Well-Architected Framework - Performance efficiency; and language-specific Learn modules such as Execute an Azure Function with triggers, Develop an App using the Maven Plugin for Azure Functions, Build Serverless APIs with Azure Functions, Create serverless logic with Azure Functions, and Refactor Node.js and Express APIs to Serverless APIs with Azure Functions |
Next steps
Learn about the anatomy of an Azure Functions application
What are Durable Functions?
8/2/2022 • 22 minutes to read
Durable Functions is an extension of Azure Functions that lets you write stateful functions in a serverless
compute environment. The extension lets you define stateful workflows by writing orchestrator functions and
stateful entities by writing entity functions using the Azure Functions programming model. Behind the scenes,
the extension manages state, checkpoints, and restarts for you, allowing you to focus on your business logic.
Supported languages
Durable Functions is designed to work with all Azure Functions programming languages but may have different
minimum requirements for each language. The following table shows the minimum supported app
configurations:
| Language stack | Azure Functions runtime versions | Language worker version | Minimum bundles version |
| --- | --- | --- | --- |
As with Azure Functions, there are templates to help you develop Durable Functions using Visual Studio 2019, Visual Studio Code, and the Azure portal.
Application patterns
The primary use case for Durable Functions is simplifying complex, stateful coordination requirements in
serverless applications. The following sections describe typical application patterns that can benefit from
Durable Functions:
Function chaining
Fan-out/fan-in
Async HTTP APIs
Monitoring
Human interaction
Aggregator (stateful entities)
Pattern #1: Function chaining
In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the output of
one function is applied to the input of another function.
You can use Durable Functions to implement the function chaining pattern concisely as shown in the following
example.
In this example, the values F1 , F2 , F3 , and F4 are the names of other functions in the same function app.
You can implement control flow by using normal imperative coding constructs. Code executes from the top
down. The code can involve existing language control flow semantics, like conditionals and loops. You can
include error handling logic in try / catch / finally blocks.
C#
JavaScript
Python
PowerShell
Java
[FunctionName("Chaining")]
public static async Task<object> Run(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
try
{
var x = await context.CallActivityAsync<object>("F1", null);
var y = await context.CallActivityAsync<object>("F2", x);
var z = await context.CallActivityAsync<object>("F3", y);
return await context.CallActivityAsync<object>("F4", z);
}
catch (Exception)
{
// Error handling or compensation goes here.
}
}
You can use the context parameter to invoke other functions by name, pass parameters, and return function
output. Each time the code calls await , the Durable Functions framework checkpoints the progress of the
current function instance. If the process or virtual machine recycles midway through the execution, the function
instance resumes from the preceding await call. For more information, see the next section, Pattern #2: Fan
out/fan in.
Pattern #2: Fan out/fan in
In the fan out/fan in pattern, you execute multiple functions in parallel and then wait for all functions to finish.
Often, some aggregation work is done on the results that are returned from the functions.
With normal functions, you can fan out by having the function send multiple messages to a queue. Fanning back
in is much more challenging. To fan in, in a normal function, you write code to track when the queue-triggered
functions end, and then store function outputs.
The Durable Functions extension handles this pattern with relatively simple code:
C#
JavaScript
Python
PowerShell
Java
[FunctionName("FanOutFanIn")]
public static async Task Run(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
var parallelTasks = new List<Task<int>>();
await Task.WhenAll(parallelTasks);
The fan-out work is distributed to multiple instances of the F2 function. The work is tracked by using a dynamic
list of tasks. Task.WhenAll is called to wait for all the called functions to finish. Then, the F2 function outputs
are aggregated from the dynamic task list and passed to the F3 function.
The automatic checkpointing that happens at the await call on Task.WhenAll ensures that a potential midway
crash or reboot doesn't require restarting an already completed task.
NOTE
In rare circumstances, it's possible that a crash could happen in the window after an activity function completes but before
its completion is saved into the orchestration history. If this happens, the activity function would re-run from the
beginning after the process recovers.
Pattern #3: Async HTTP APIs
The async HTTP API pattern addresses the problem of coordinating the state of long-running operations with external clients. A common way to implement this pattern is by having an HTTP endpoint trigger the long-running action, then redirecting the client to a status endpoint that it polls to learn when the operation is finished.
Durable Functions provides built-in support for this pattern, simplifying or even removing the code you need
to write to interact with long-running function executions. For example, the Durable Functions quickstart
samples (C#, JavaScript, Python, PowerShell, and Java) show a simple REST command that you can use to start
new orchestrator function instances. After an instance starts, the extension exposes webhook HTTP APIs that
query the orchestrator function status.
The following example shows REST commands that start an orchestrator and query its status. For clarity, some
protocol details are omitted from the example.
> curl -X POST https://myfunc.azurewebsites.net/api/orchestrators/DoWork -H "Content-Length: 0" -i
HTTP/1.1 202 Accepted
Content-Type: application/json
Location: https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/instances/b79baf67f717453ca9e86c5da21e03ec

{"id":"b79baf67f717453ca9e86c5da21e03ec", ...}

> curl https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/instances/b79baf67f717453ca9e86c5da21e03ec -i
HTTP/1.1 202 Accepted
Content-Type: application/json
Location: https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/instances/b79baf67f717453ca9e86c5da21e03ec

{"runtimeStatus":"Running","lastUpdatedTime":"2019-03-16T21:20:47Z", ...}

> curl https://myfunc.azurewebsites.net/runtime/webhooks/durabletask/instances/b79baf67f717453ca9e86c5da21e03ec -i
HTTP/1.1 200 OK
Content-Length: 175
Content-Type: application/json

{"runtimeStatus":"Completed","lastUpdatedTime":"2019-03-16T21:20:57Z", ...}
Because the Durable Functions runtime manages state for you, you don't need to implement your own status-
tracking mechanism.
The Durable Functions extension exposes built-in HTTP APIs that manage long-running orchestrations. You can
alternatively implement this pattern yourself by using your own function triggers (such as HTTP, a queue, or
Azure Event Hubs) and the orchestration client binding. For example, you might use a queue message to trigger
termination. Or, you might use an HTTP trigger that's protected by an Azure Active Directory authentication
policy instead of the built-in HTTP APIs that use a generated key for authentication.
For more information, see the HTTP features article, which explains how you can expose asynchronous, long-
running processes over HTTP using the Durable Functions extension.
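To make the built-in support concrete, here's a minimal sketch of the kind of starter function behind the curl example above. The DoWork_HttpStart name is an assumption; StartNewAsync and CreateCheckStatusResponse are the standard Durable Functions 2.x client APIs.

```csharp
[FunctionName("DoWork_HttpStart")]
public static async Task<HttpResponseMessage> HttpStart(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient starter,
    ILogger log)
{
    // Start a new instance of the "DoWork" orchestrator function.
    string instanceId = await starter.StartNewAsync("DoWork", null);
    log.LogInformation("Started orchestration with ID = '{instanceId}'.", instanceId);

    // Produces the 202 response and status-query Location header shown above.
    return starter.CreateCheckStatusResponse(req, instanceId);
}
```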
Pattern #4: Monitor
The monitor pattern refers to a flexible, recurring process in a workflow. An example is polling until specific
conditions are met. You can use a regular timer trigger to address a basic scenario, such as a periodic cleanup
job, but its interval is static and managing instance lifetimes becomes complex. You can use Durable Functions
to create flexible recurrence intervals, manage task lifetimes, and create multiple monitor processes from a
single orchestration.
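For contrast, the basic timer-trigger approach mentioned above looks like the following sketch; the function name and the five-minute CRON schedule are illustrative.

```csharp
[FunctionName("PeriodicCleanup")]
public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
{
    // Runs on a fixed schedule; the interval can't adapt per job the way a
    // durable monitor orchestration can.
    log.LogInformation($"Cleanup executed at: {DateTime.UtcNow}");
}
```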
An example of the monitor pattern is to reverse the earlier async HTTP API scenario. Instead of exposing an
endpoint for an external client to monitor a long-running operation, the long-running monitor consumes an
external endpoint, and then waits for a state change.
In a few lines of code, you can use Durable Functions to create multiple monitors that observe arbitrary
endpoints. The monitors can end execution when a condition is met, or another function can use the durable
orchestration client to terminate the monitors. You can change a monitor's wait interval based on a specific condition (for example, exponential backoff).
The following code implements a basic monitor:
C#
JavaScript
Python
PowerShell
Java
[FunctionName("MonitorJobStatus")]
public static async Task Run(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
int jobId = context.GetInput<int>();
int pollingInterval = GetPollingInterval();
DateTime expiryTime = GetExpiryTime();
When a request is received, a new orchestration instance is created for that job ID. The instance polls a status
until either a condition is met or until a timeout expires. A durable timer controls the polling interval. Then, more
work can be performed, or the orchestration can end.
Pattern #5: Human interaction
Many automated processes involve some kind of human interaction. Involving humans in an automated process
is tricky because people aren't as highly available and as responsive as cloud services. An automated process
might allow for this interaction by using timeouts and compensation logic.
An approval process is an example of a business process that involves human interaction. Approval from a
manager might be required for an expense report that exceeds a certain dollar amount. If the manager doesn't
approve the expense report within 72 hours (maybe the manager went on vacation), an escalation process kicks
in to get the approval from someone else (perhaps the manager's manager).
You can implement the pattern in this example by using an orchestrator function. The orchestrator uses a
durable timer to request approval. The orchestrator escalates if timeout occurs. The orchestrator waits for an
external event, such as a notification that's generated by a human interaction.
These examples create an approval process to demonstrate the human interaction pattern:
C#
JavaScript
Python
PowerShell
Java
[FunctionName("ApprovalWorkflow")]
public static async Task Run(
[OrchestrationTrigger] IDurableOrchestrationContext context)
{
await context.CallActivityAsync("RequestApproval", null);
using (var timeoutCts = new CancellationTokenSource())
{
DateTime dueTime = context.CurrentUtcDateTime.AddHours(72);
Task durableTimeout = context.CreateTimer(dueTime, timeoutCts.Token);
An external client can deliver the event notification to a waiting orchestrator function by using the built-in HTTP
APIs:
curl -d "true"
http://localhost:7071/runtime/webhooks/durabletask/instances/{instanceId}/raiseEvent/ApprovalEvent -H
"Content-Type: application/json"
An event can also be raised using the durable orchestration client from another function in the same function
app:
C#
JavaScript
Python
PowerShell
Java
[FunctionName("RaiseEventToOrchestration")]
public static async Task Run(
[HttpTrigger] string instanceId,
[DurableClient] IDurableOrchestrationClient client)
{
bool isApproved = true;
await client.RaiseEventAsync(instanceId, "ApprovalEvent", isApproved);
}
Pattern #6: Aggregator (stateful entities)
The sixth pattern is about aggregating event data over a period of time into a single, addressable entity. In this pattern, the data being aggregated might come from multiple sources, might be delivered in batches, or might be scattered over long periods of time.
The tricky thing about trying to implement this pattern with normal, stateless functions is that concurrency
control becomes a huge challenge. Not only do you need to worry about multiple threads modifying the same
data at the same time, you also need to worry about ensuring that the aggregator only runs on a single VM at a
time.
You can use Durable entities to easily implement this pattern as a single function.
C#
JavaScript
Python
PowerShell
Java
[FunctionName("Counter")]
public static void Counter([EntityTrigger] IDurableEntityContext ctx)
{
int currentValue = ctx.GetState<int>();
switch (ctx.OperationName.ToLowerInvariant())
{
case "add":
int amount = ctx.GetInput<int>();
ctx.SetState(currentValue + amount);
break;
case "reset":
ctx.SetState(0);
break;
case "get":
ctx.Return(currentValue);
break;
}
}
Durable entities can also be modeled as classes in .NET. This model can be useful if the list of operations is fixed
and becomes large. The following example is an equivalent implementation of the Counter entity using .NET
classes and methods.
public class Counter
{
    [JsonProperty("value")]
    public int CurrentValue { get; set; }

    public void Add(int amount) => this.CurrentValue += amount;
    public void Reset() => this.CurrentValue = 0;
    public int Get() => this.CurrentValue;

    [FunctionName(nameof(Counter))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<Counter>();
}
Clients can enqueue operations for (also known as "signaling") an entity function using the entity client binding.
C#
JavaScript
Python
PowerShell
Java
[FunctionName("EventHubTriggerCSharp")]
public static async Task Run(
[EventHubTrigger("device-sensor-events")] EventData eventData,
[DurableClient] IDurableEntityClient entityClient)
{
var metricType = (string)eventData.Properties["metric"];
var delta = BitConverter.ToInt32(eventData.Body, eventData.Body.Offset);
NOTE
Dynamically generated proxies are also available in .NET for signaling entities in a type-safe way. And in addition to
signaling, clients can also query for the state of an entity function using type-safe methods on the orchestration client
binding.
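The following sketch illustrates that type-safe approach. The ICounter interface and the "mycounter" entity key are assumptions; for dispatch to work, the Counter entity class would need to implement the interface.

```csharp
// Hypothetical interface for the Counter entity; read operations return Task<T>.
public interface ICounter
{
    void Add(int amount);
    void Reset();
    Task<int> Get();
}

// From a client function with a [DurableClient] IDurableEntityClient binding,
// the generated proxy turns this lambda into an "add" signal for the entity.
await entityClient.SignalEntityAsync<ICounter>(
    new EntityId(nameof(Counter), "mycounter"),
    proxy => proxy.Add(5));
```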
Entity functions are available in Durable Functions 2.0 and above for C#, JavaScript, and Python.
The technology
Behind the scenes, the Durable Functions extension is built on top of the Durable Task Framework, an open-
source library on GitHub that's used to build workflows in code. Like Azure Functions is the serverless evolution
of Azure WebJobs, Durable Functions is the serverless evolution of the Durable Task Framework. Microsoft and
other organizations use the Durable Task Framework extensively to automate mission-critical processes. It's a
natural fit for the serverless Azure Functions environment.
Code constraints
In order to provide reliable and long-running execution guarantees, orchestrator functions have a set of coding
rules that must be followed. For more information, see the Orchestrator function code constraints article.
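One example of such a rule: orchestrator code must be deterministic, because the framework replays it to rebuild state, so replay-safe values come from the context object rather than direct system calls. A minimal sketch of the idea (function and variable names are illustrative):

```csharp
[FunctionName("DeterministicOrchestrator")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Wrong: DateTime.UtcNow and Guid.NewGuid() return new values on every replay.
    // Right: the context provides replay-safe equivalents.
    DateTime deadline = context.CurrentUtcDateTime.AddMinutes(5);
    Guid correlationId = context.NewGuid();

    // Durable timers (not Thread.Sleep or Task.Delay) are the replay-safe way to wait.
    await context.CreateTimer(deadline, CancellationToken.None);
}
```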
Billing
Durable Functions are billed the same as Azure Functions. For more information, see Azure Functions pricing.
When executing orchestrator functions in the Azure Functions Consumption plan, there are some billing
behaviors to be aware of. For more information on these behaviors, see the Durable Functions billing article.
Jump right in
You can get started with Durable Functions in under 10 minutes by completing one of these language-specific
quickstart tutorials:
C# using Visual Studio 2019
JavaScript using Visual Studio Code
Python using Visual Studio Code
PowerShell using Visual Studio Code
Java using Maven
In these quickstarts, you locally create and test a "hello world" durable function. You then publish the function
code to Azure. The function you create orchestrates and chains together calls to other functions.
Publications
Durable Functions is developed in collaboration with Microsoft Research. As a result, the Durable Functions
team actively produces research papers and artifacts; these include:
Durable Functions: Semantics for Stateful Serverless (OOPSLA'21)
Serverless Workflows with Durable Functions and Netherite (pre-print)
Learn more
The following video highlights the benefits of Durable Functions:
For a more in-depth discussion of Durable Functions and the underlying technology, see the following video (it's
focused on .NET, but the concepts also apply to other supported languages):
Because Durable Functions is an advanced extension for Azure Functions, it isn't appropriate for all applications.
For a comparison with other Azure orchestration technologies, see Compare Azure Functions and Azure Logic
Apps.
Next steps
Durable Functions function types and features
Choose the right integration and automation
services in Azure
8/2/2022 • 6 minutes to read
|  | Power Automate | Logic Apps |
| --- | --- | --- |
| Users | Office workers, business users, SharePoint administrators | Pro integrators and developers, IT pros |
| Design tool | In-browser and mobile app, UI only | In-browser, Visual Studio Code, and Visual Studio, with code view available |
| Application lifecycle management (ALM) | Design and test in non-production environments, promote to production when ready | Azure DevOps: source control, testing, support, automation, and manageability in Azure Resource Manager |
| Security | Microsoft 365 security audit logs, DLP, encryption at rest for sensitive data | Security assurance of Azure: Azure security, Microsoft Defender for Cloud, audit logs |
|  | Durable Functions | Logic Apps |
| --- | --- | --- |
| Actions | Each activity is an Azure function; write code for activity functions | Large collection of ready-made actions |
| Management | REST API, Visual Studio | Azure portal, REST API, PowerShell, Visual Studio |
| Execution context | Can run locally or in the cloud | Runs only in the cloud |
|  | Functions | WebJobs with WebJobs SDK |
| --- | --- | --- |
| Pay-per-use pricing | ✔ |  |
1 WebJobs (without the WebJobs SDK) supports C#, Java, JavaScript, Bash, .cmd, .bat, PowerShell, PHP,
TypeScript, Python, and more. This is not a comprehensive list. A WebJob can run any program or script that can
run in the App Service sandbox.
2 WebJobs (without the WebJobs SDK) supports NPM and NuGet.
Summary
Azure Functions offers more developer productivity than Azure App Service WebJobs does. It also offers more
options for programming languages, development environments, Azure service integration, and pricing. For
most scenarios, it's the best choice.
Here are two scenarios for which WebJobs may be the best choice:
You need more control over the code that listens for events, the JobHost object. Functions offers a limited
number of ways to customize JobHost behavior in the host.json file. Sometimes you need to do things that
can't be specified by a string in a JSON file. For example, only the WebJobs SDK lets you configure a custom
retry policy for Azure Storage.
You have an App Service app for which you want to run code snippets, and you want to manage them
together in the same Azure DevOps environment.
For other scenarios where you want to run code snippets for integrating Azure or third-party services, choose
Azure Functions over WebJobs with the WebJobs SDK.
Next steps
Get started by creating your first flow, logic app, or function app. Select any of the following links:
Get started with Power Automate
Create a logic app
Create your first Azure function
Azure Functions hosting options
8/2/2022 • 9 minutes to read
When you create a function app in Azure, you must choose a hosting plan for your app. There are three basic
hosting plans available for Azure Functions: Consumption plan, Premium plan, and Dedicated (App Service)
plan. All hosting plans are generally available (GA) on both Linux and Windows virtual machines.
The hosting plan you choose dictates the following behaviors:
How your function app is scaled.
The resources available to each function app instance.
Support for advanced functionality, such as Azure Virtual Network connectivity.
This article provides a detailed comparison between the various hosting plans, along with Kubernetes-based
hosting.
NOTE
If you choose to host your functions in a Kubernetes cluster, consider using an Azure Arc-enabled Kubernetes cluster.
Hosting on an Azure Arc-enabled Kubernetes cluster is currently in preview. To learn more, see App Service, Functions,
and Logic Apps on Azure Arc.
Overview of plans
The following is a summary of the benefits of the three main hosting plans for Functions:
| Plan | Benefits |
| --- | --- |
| Consumption plan | Scale automatically and only pay for compute resources when your functions are running. |
| Dedicated plan | Run your functions within an App Service plan at regular App Service plan rates. |
The comparison tables in this article also include the following hosting options, which provide the highest
amount of control and isolation in which to run your function apps.
| Hosting option | Details |
| --- | --- |
The remaining tables in this article compare the plans on various features and behaviors. For a cost comparison
between dynamic hosting plans (Consumption and Premium), see the Azure Functions pricing page. For pricing
of the various Dedicated plan options, see the App Service pricing page.
Operating system/runtime
The following table shows operating system and language support for the hosting plans.
|  | Linux 1,2 code-only | Windows code-only | Linux 1,2,3 Docker container |
| --- | --- | --- | --- |
| Premium plan | C#, JavaScript, Java, Python, PowerShell Core, TypeScript | C#, JavaScript, Java, PowerShell Core, TypeScript | C#, JavaScript, Java, PowerShell Core, Python, TypeScript |
| Dedicated plan | C#, JavaScript, Java, Python, TypeScript | C#, JavaScript, Java, PowerShell Core, TypeScript | C#, JavaScript, Java, PowerShell Core, Python, TypeScript |
| ASE | C#, JavaScript, Java, Python, TypeScript | C#, JavaScript, Java, PowerShell Core, TypeScript | C#, JavaScript, Java, PowerShell Core, Python, TypeScript |
1 Linux is the only supported operating system for the Python runtime stack.
2 PowerShell support on Linux is currently in preview.
3 Linux is the only supported operating system for Docker containers.
Function app timeout duration

| Plan | Default timeout (minutes) | Maximum timeout (minutes) |
| --- | --- | --- |
| Consumption plan | 5 | 10 |
1 Regardless of the function app timeout setting, 230 seconds is the maximum amount of time that an HTTP
triggered function can take to respond to a request. This is because of the default idle timeout of Azure Load
Balancer. For longer processing times, consider using the Durable Functions async pattern or defer the actual
work and return an immediate response (see the sketch after these notes).
2 The default timeout for version 1.x of the Functions runtime is unlimited.
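A minimal sketch of the "defer the actual work" option from footnote 1, using a Queue storage output binding; the queue name and function name here are assumptions:

```csharp
// Hypothetical example: accept the request, hand the payload to a Queue storage
// output binding, and return 202 Accepted without doing the long-running work.
[FunctionName("EnqueueWork")]
public static IActionResult Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    [Queue("work-items")] out string workItem,   // assumed queue name
    ILogger log)
{
    workItem = new StreamReader(req.Body).ReadToEnd();
    log.LogInformation("Work item queued for background processing.");
    return new AcceptedResult();
}
```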
Scale
The following table compares the scaling behaviors of the various hosting plans.
Maximum instances are given on a per-function app (Consumption) or per-plan (Premium/Dedicated) basis,
unless otherwise indicated.
| Plan | Scale out | Max # instances |
| --- | --- | --- |
1 During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a
Consumption plan.
2 In some regions, Linux apps on a Premium plan can scale to 40 instances. For more information, see the Premium plan article.
Cold start behavior

| Plan | Details |
| --- | --- |
| Consumption plan | Apps may scale to zero when idle, meaning some requests may have additional latency at startup. The Consumption plan does have some optimizations to help decrease cold start time, including pulling from pre-warmed placeholder functions that already have the function host and language processes running. |
| Dedicated plan | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
Service limits
| Resource | Consumption plan | Premium plan | Dedicated plan | ASE | Kubernetes |
| --- | --- | --- | --- | --- | --- |
| App Service plans | 100 per region | 100 per resource group | 100 per resource group | - | - |
| Custom domain SSL support | unbounded SNI SSL connection included | unbounded SNI SSL and 1 IP SSL connections included | unbounded SNI SSL and 1 IP SSL connections included | unbounded SNI SSL and 1 IP SSL connections included | n/a |
1 By default, the timeout for the Functions 1.x runtime in an App Service plan is unbounded.
2 Requires the App Service plan be set to Always On. Pay at standard rates.
3 These limits are set in the host.
4 The actual number of function apps that you can host depends on the activity of the apps, the size of the
machine instances, and the corresponding resource utilization.
5 The storage limit is the total content size in temporary storage across all apps in the same App Service plan.
6 On a Consumption plan, you can map a custom domain only by using a CNAME record. For function apps in a Premium plan or an App Service plan, you can map a custom domain using either a CNAME or an A record.
7 Guaranteed for up to 60 minutes.
8 Workers are roles that host customer apps. Workers are available in three fixed sizes: One vCPU/3.5 GB RAM; Two vCPU/7 GB RAM; Four vCPU/14 GB RAM.
Networking features
| Feature | Consumption plan | Premium plan | Dedicated plan | ASE | Kubernetes |
| --- | --- | --- | --- | --- | --- |
Billing
| Plan | Details |
| --- | --- |
| Consumption plan | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. |
| Premium plan | Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must always be kept warm. This plan provides the most predictable pricing. |
| Dedicated plan | You pay the same for function apps in an App Service plan as you would for other App Service resources, like web apps. |
| App Service Environment (ASE) | There's a flat monthly rate for an ASE that pays for the infrastructure and doesn't change with the size of the ASE. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. |
Next steps
Deployment technologies in Azure Functions
Azure Functions developer guide
Quickstart: Create a C# function in Azure using
Visual Studio Code
8/2/2022 • 8 minutes to read
In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing
the code locally, you deploy it to the serverless environment of Azure Functions. This article creates an HTTP
triggered function that runs on .NET 6.0. There's also a CLI-based version of this article.
By default, this article shows you how to create C# functions that run on .NET 6 in the same process as the
Functions host. These in-process C# functions are only supported on Long Term Support (LTS) versions of .NET,
such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in
preview) in an isolated process, see the alternate version of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
| Prompt | Selection |
| --- | --- |
| Select a template for your project's first function | Choose HTTP trigger. |
| Select how you would like to open your project | Select Add to workspace. |
NOTE
If you don't see .NET 6 as a runtime option, check the following:
Make sure you have installed the .NET 6.0 SDK.
Press F1 and type Preferences: Open user settings , then search for
Azure Functions: Project Runtime and change the default runtime version to ~4 .
4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP
trigger. You can view the local project files in the Explorer. For more information about the files that are
created, see Generated project files.
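The generated project contains an HTTP-triggered function similar to this trimmed sketch; the actual file the template produces may differ slightly between Core Tools versions.

```csharp
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    // Read the optional name from the query string or the JSON request body.
    string name = req.Query["name"];
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    string responseMessage = string.IsNullOrEmpty(name)
        ? "This HTTP triggered function executed successfully. Pass a name in the query string or request body for a personalized response."
        : $"Hello, {name}. This HTTP triggered function executed successfully.";

    return new OkObjectResult(responseMessage);
}
```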
3. In Enter request body , press Enter to send a request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in the Terminal panel.
5. Press Ctrl + C to stop Core Tools and disconnect the debugger.
After checking that the function runs correctly on your local computer, it's time to use Visual Studio Code to
publish the project directly to Azure.
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
| Prompt | Selection |
| --- | --- |
| Select subscription | Choose the subscription to use. You won't see this prompt when you have only one subscription visible under Resources. |
| Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. |
| Select a runtime stack | Choose the language version on which you've been running locally. |
| Select a location for new resources | For better performance, choose a region near you. |
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group .
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group , and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
For more information about Functions costs, see Estimating Consumption plan costs.
Next steps
You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next
article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn
more about connecting to other Azure services, see Add bindings to an existing function in Azure Functions.
Quickstart: Create a Java function in Azure using Visual Studio Code
In this article, you use Visual Studio Code to create a Java function that responds to HTTP requests. After testing
the code locally, you deploy it to the serverless environment of Azure Functions.
If Visual Studio Code isn't your preferred development tool, check out our similar tutorials for Java developers:
Gradle
IntelliJ IDEA
Maven
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
| Prompt | Selection |
| --- | --- |
| Select a template for your project's first function | Choose HTTP trigger. |
| Select how you would like to open your project | Choose Add to workspace. |
4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP
trigger. You can view the local project files in the Explorer. For more information about the files that are
created, see Generated project files.
3. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in Terminal panel.
5. With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger.
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code
to publish the project directly to Azure.
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
| Prompt | Selection |
| --- | --- |
| Select subscription | Choose the subscription to use. You won't see this prompt when you have only one subscription visible under Resources. |
| Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. |
| Select a runtime stack | Choose the language version on which you've been running locally. |
| Select a location for new resources | For better performance, choose a region near you. |
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group .
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group , and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
For more information about Functions costs, see Estimating Consumption plan costs.
Next steps
You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next
article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure
services, see Add bindings to an existing function in Azure Functions.
Connect to an Azure Storage queue
Quickstart: Create a JavaScript function in Azure
using Visual Studio Code
8/2/2022 • 8 minutes to read
Use Visual Studio Code to create a JavaScript function that responds to HTTP requests. Test the code locally, then
deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a CLI-based version of this article.
2. Choose the directory location for your project workspace and choose Select . You should either create a
new folder or choose an empty folder for the project workspace. Don't choose a project folder that is
already part of a workspace.
3. Provide the following information at the prompts:
| Prompt | Selection |
| --- | --- |
| Select a template for your project's first function | Choose HTTP trigger. |
| Select how you would like to open your project | Choose Add to workspace. |
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You
can view the local project files in the Explorer. To learn more about files that are created, see Generated
project files.
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With Core Tools still running in Terminal , choose the Azure icon in the activity bar. In the Workspace
area, expand Local Project > Functions . Right-click (Windows) or Ctrl - click (macOS) the
HttpExample function and choose Execute Function Now... .
3. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in Terminal panel.
5. With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger.
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code
to publish the project directly to Azure.
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
| Prompt | Selection |
| --- | --- |
| Select subscription | Choose the subscription to use. You won't see this prompt when you have only one subscription visible under Resources. |
| Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. |
| Select a runtime stack | Choose the language version on which you've been running locally. |
| Select a location for new resources | For better performance, choose a region near you. |
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Run the function in Azure
1. Back in the Resources area in the side bar, expand your subscription, your new function app, and
Functions . Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose
Execute Function Now....
2. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
module.exports = async function (context, req) {
    try {
        context.log('JavaScript HTTP trigger function processed a request.');

        // Read name and sport from the query string or the request body.
        const name = req.query.name || (req.body && req.body.name);
        const sport = req.query.sport || (req.body && req.body.sport);

        if (!name || !sport) {
            context.res = {
                status: 400
            };
            return;
        }

        const message = `${name} likes ${sport}`;

        // Construct response
        const responseJSON = {
            "name": name,
            "sport": sport,
            "message": message,
            "success": true
        }

        context.res = {
            // status: 200, /* Defaults to 200 */
            body: responseJSON,
            contentType: 'application/json'
        };
    } catch(err) {
        context.res = {
            status: 500
        };
    }
}
{
"name": "Tom",
"sport": "basketball",
"message": "Tom likes basketball",
"success": true
}
Troubleshooting
Use the table below to resolve the most common issues encountered when using this quickstart.
| Problem | Solution |
| --- | --- |
| Can't create a local function project? | Make sure you have the Azure Functions extension installed. |
| Can't run the function locally? | Make sure you have the Azure Functions Core Tools installed. When running on Windows, make sure that the default terminal shell for Visual Studio Code isn't set to WSL Bash. |
| Can't deploy function to Azure? | Review the Output for error information. The bell icon in the lower right corner is another way to view the output. Did you publish to an existing function app? That action overwrites the content of that app in Azure. |
| Couldn't run the cloud-based Function app? | Remember to use the query string to send in parameters. |
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, select the Azure icon to open the Azure explorer.
2. In the Resource Groups section, find your resource group.
3. Right-click the resource group and select Delete .
To learn more about Functions costs, see Estimating Consumption plan costs.
Next steps
You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next
article, you expand that function by connecting to either Azure Cosmos DB or Azure Storage. To learn more
about connecting to other Azure services, see Add bindings to an existing function in Azure Functions. If you
want to learn more about security, see Securing Azure Functions.
Connect to Azure Cosmos DB Connect to Azure Queue Storage
Quickstart: Create a PowerShell function in Azure
using Visual Studio Code
8/2/2022 • 7 minutes to read
In this article, you use Visual Studio Code to create a PowerShell function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a CLI-based version of this article.
2. Choose the directory location for your project workspace and choose Select . You should either create a
new folder or choose an empty folder for the project workspace. Don't choose a project folder that is
already part of a workspace.
3. Provide the following information at the prompts:
Prompt | Selection
Select a template for your project's first function | Choose HTTP trigger.
Select how you would like to open your project | Choose Add to workspace.
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You
can view the local project files in the Explorer. To learn more about files that are created, see Generated
project files.
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With Core Tools still running in Terminal , choose the Azure icon in the activity bar. In the Workspace
area, expand Local Project > Functions . Right-click (Windows) or Ctrl - click (macOS) the
HttpExample function and choose Execute Function Now... .
3. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in Terminal panel.
5. With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger.
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code
to publish the project directly to Azure.
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet
have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure
for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
Prompt | Selection
Select subscription | Choose the subscription to use. You won't see this prompt when you have only one subscription visible under Resources.
Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
Select a runtime stack | Choose the language version on which you've been running locally.
Select a location for new resources | For better performance, choose a region near you.
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Run the function in Azure
1. Back in the Resources area in the sidebar, expand your subscription, your new function app, and
Functions . Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose
Execute Function Now....
2. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group.
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group, and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
For more information about Functions costs, see Estimating Consumption plan costs.
Next steps
You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next
article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure
services, see Add bindings to an existing function in Azure Functions.
Connect to an Azure Storage queue
Quickstart: Create a function in Azure with Python
using Visual Studio Code
8/2/2022 • 7 minutes to read
In this article, you use Visual Studio Code to create a Python function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a CLI-based version of this article.
2. Choose the directory location for your project workspace and choose Select . You should either create a
new folder or choose an empty folder for the project workspace. Don't choose a project folder that is
already part of a workspace.
3. Provide the following information at the prompts:
Prompt | Selection
Select a Python interpreter to create a virtual environment | Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.
Select a template for your project's first function | Choose HTTP trigger.
Select how you would like to open your project | Choose Add to workspace.
4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP
trigger. You can view the local project files in the Explorer. For more information about the files that are
created, see Generated project files.
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With Core Tools still running in Terminal , choose the Azure icon in the activity bar. In the Workspace
area, expand Local Project > Functions . Right-click (Windows) or Ctrl - click (macOS) the
HttpExample function and choose Execute Function Now... .
3. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in Terminal panel.
5. With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger.
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code
to publish the project directly to Azure.
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet
have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure
for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
Prompt | Selection
Select subscription | Choose the subscription to use. You won't see this prompt when you have only one subscription visible under Resources.
Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
Select a runtime stack | Choose the language version on which you've been running locally.
Select a location for new resources | For better performance, choose a region near you.
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Run the function in Azure
1. Back in the Resources area in the sidebar, expand your subscription, your new function app, and
Functions . Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose
Execute Function Now....
2. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group.
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group, and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
For more information about Functions costs, see Estimating Consumption plan costs.
Next steps
You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next
article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure
services, see Add bindings to an existing function in Azure Functions.
Connect to an Azure Storage queue
Quickstart: Create a function in Azure with
TypeScript using Visual Studio Code
8/2/2022 • 7 minutes to read
In this article, you use Visual Studio Code to create a TypeScript function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a CLI-based version of this article.
2. Choose the directory location for your project workspace and choose Select . You should either create a
new folder or choose an empty folder for the project workspace. Don't choose a project folder that is
already part of a workspace.
3. Provide the following information at the prompts:
Prompt | Selection
Select a template for your project's first function | Choose HTTP trigger.
Select how you would like to open your project | Choose Add to workspace.
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You
can view the local project files in the Explorer. To learn more about files that are created, see Generated
project files.
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With Core Tools still running in Terminal , choose the Azure icon in the activity bar. In the Workspace
area, expand Local Project > Functions . Right-click (Windows) or Ctrl - click (macOS) the
HttpExample function and choose Execute Function Now... .
3. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in Terminal panel.
5. With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger.
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code
to publish the project directly to Azure.
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet
have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure
for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
Prompt | Selection
Select subscription | Choose the subscription to use. You won't see this prompt when you have only one subscription visible under Resources.
Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
Select a runtime stack | Choose the language version on which you've been running locally.
Select a location for new resources | For better performance, choose a region near you.
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Run the function in Azure
1. Back in the Resources area in the sidebar, expand your subscription, your new function app, and
Functions . Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose
Execute Function Now....
2. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group.
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group, and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
For more information about Functions costs, see Estimating Consumption plan costs.
Next steps
You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next
article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure
services, see Add bindings to an existing function in Azure Functions.
Connect to an Azure Storage queue
Quickstart: Create a Go or Rust function in Azure
using Visual Studio Code
8/2/2022 • 10 minutes to read
In this article, you use Visual Studio Code to create a custom handler function that responds to HTTP requests.
After testing the code locally, you deploy it to the serverless environment of Azure Functions.
Custom handlers can be used to create functions in any language or runtime by running an HTTP server
process. This article supports both Go and Rust.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Prompt | Selection
Select a template for your project's first function | Choose HTTP trigger.
Select how you would like to open your project | Choose Add to workspace.
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You
can view the local project files in the Explorer.
1. Press Ctrl + N (Cmd + N on macOS) to create a new file. Save it as handler.go in the function app root
(in the same folder as host.json).
2. In handler.go, add the following code and save the file. This is your Go custom handler.
package main
import (
"fmt"
"log"
"net/http"
"os"
)
func main() {
listenAddr := ":8080"
if val, ok := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT"); ok {
listenAddr = ":" + val
}
http.HandleFunc("/api/HttpExample", helloHandler)
log.Printf("About to listen on %s. Go to https://127.0.0.1%s/", listenAddr, listenAddr)
log.Fatal(http.ListenAndServe(listenAddr, nil))
}

// helloHandler is the handler registered in main above. It echoes the optional
// "name" query-string value, matching the behavior described in this quickstart.
func helloHandler(w http.ResponseWriter, r *http.Request) {
    message := "This HTTP triggered function executed successfully. Pass a name in the query string for a personalized response.\n"
    name := r.URL.Query().Get("name")
    if name != "" {
        message = fmt.Sprintf("Hello, %s. This HTTP triggered function executed successfully.\n", name)
    }
    fmt.Fprint(w, message)
}
3. Press Ctrl + Shift + ` or select New Terminal from the Terminal menu to open a new integrated
terminal in VS Code.
4. Compile your custom handler using the following command. An executable file named handler (
handler.exe on Windows) is output in the function app root folder.
go build handler.go
"customHandler": {
"description": {
"defaultExecutablePath": "handler",
"workingDirectory": "",
"arguments": []
},
"enableForwardingHttpRequest": true
}
1. Start the local Azure Functions runtime host from the root of the project folder:
func start
2. With Core Tools running, navigate to the following URL to execute a GET request, which includes
?name=Functions query string.
http://localhost:7071/api/HttpExample?name=Functions
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet
have an Azure account, choose Create and Azure Account.... Students can choose Create and Azure
for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
1. In the integrated terminal, compile the handler to Linux/x64. A binary named handler is created in the
function app root.
macOS
Linux
Windows
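For example, on macOS or Linux you can cross-compile by setting the Go target environment variables before building (on Windows, set $env:GOOS and $env:GOARCH in PowerShell instead):

GOOS=linux GOARCH=amd64 go build handler.go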
Prompt | Selection
Select subscription | Choose the subscription to use. You won't see this when you have only one subscription visible under Resources.
Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
Select a location for new resources | For better performance, choose a region near you.
The extension shows the status of individual resources as they are being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group.
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group, and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
For more information about Functions costs, see Estimating Consumption plan costs.
Next steps
Learn about Azure Functions custom handlers
Quickstart: Create your first C# function in Azure
using Visual Studio
8/2/2022 • 9 minutes to read
Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this
project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using
Visual Studio Code, you should instead consider the Visual Studio Code-based version of this article.
By default, this article shows you how to create C# functions that run on .NET 6 in the same process as the
Functions host. These in-process C# functions are only supported on a Long Term Support (LTS) version of .NET,
such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in
preview) in an isolated process, see the alternate version of this article.
In this article, you learn how to:
Use Visual Studio to create a C# class library project on .NET 6.0.
Create a function that responds to HTTP requests.
Run your code locally to verify function behavior.
Deploy your code project to Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Prerequisites
Visual Studio 2022, which supports .NET 6.0. Make sure to select the Azure development workload
during installation.
Azure subscription. If you don't already have an account, create a free one before you begin.
Make sure you set the Authorization level to Anonymous . If you choose the default level of Function ,
you're required to present the function key in requests to access your function endpoint.
5. Select Create to create the function project and HTTP trigger function.
Visual Studio creates a project and class that contains boilerplate code for the HTTP trigger function type. The
boilerplate code sends an HTTP response that includes a value from the request body or query string. The
HttpTrigger attribute specifies that the function is triggered by an HTTP request.
Your function definition should now look like the following code:
.NET 6
.NET 6 Isolated
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
ILogger log)
Now that you've renamed the function, you can test it on your local computer.
2. Select Azure Function App (Windows) for the Specific target , which creates a function app that runs
on Windows, and then select Next .
3. In the Function Instance , choose Create a new Azure Function...
4. Create a new instance using the values specified in the following table:
Resource group | Name of your resource group | The resource group in which you want to create your function app. Select an existing resource group from the drop-down list or select New to create a new resource group.
7. Select Finish , and on the Publish page, select Publish to deploy the package containing your project files
to your new function app in Azure.
After the deployment completes, the root URL of the function app in Azure is shown in the Publish tab.
8. In the Publish tab, in the Hosting section, choose Open in Azure portal. This opens the new function
app Azure resource in the Azure portal.
4. Go to this URL and you see a response in the browser to the remote GET request returned by the
function, which looks like the following example:
Clean up resources
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You created Azure resources to complete this quickstart. You may be billed for these resources, depending on
your account status and service pricing. Other quickstarts in this collection build upon this quickstart. If you plan
to work with subsequent quickstarts, tutorials, or with any of the services you have created in this quickstart,
don't clean up the resources.
Use the following steps to delete the function app and its related resources to avoid incurring any further costs.
1. In the Visual Studio Publish dialog, in the Hosting section, select Open in Azure portal.
2. In the function app page, select the Overview tab and then select the link under Resource group.
3. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
4. Select Delete resource group , and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
In this quickstart, you used Visual Studio to create and publish a C# function app in Azure with a simple HTTP
trigger function.
.NET 6
.NET 6 Isolated
To learn more about working with C# functions that run in-process with the Functions host, see Develop C#
class library functions using Azure Functions.
Advance to the next article to learn how to add an Azure Storage queue binding to your function:
Add an Azure Storage queue binding to your function
Quickstart: Create a C# function in Azure from the
command line
8/2/2022 • 9 minutes to read
In this article, you use command-line tools to create a C# function that responds to HTTP requests. After testing
the code locally, you deploy it to the serverless environment of Azure Functions.
This article supports creating both types of compiled C# functions:
In-process | Your function code runs in the same process as the Functions host. To learn more, see Develop C# class library functions using Azure Functions.
Isolated process | Your function code runs in a separate .NET worker process. Check out .NET supported versions before getting started. To learn more, see Develop isolated process functions in C#.
This article creates an HTTP triggered function that runs on .NET 6.0. There is also a Visual Studio Code-based
version of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
In a terminal or command window, run func --version to check that the Azure Functions Core Tools are
version 4.x.
Run dotnet --list-sdks to check that the required versions are installed.
Run az --version to check that the Azure CLI version is 2.4 or later.
Run az login to sign in to Azure and verify an active subscription.
cd LocalFunctionProj
This folder contains various files for the project, including configuration files named local.settings.json
and host.json. Because local.settings.json can contain secrets downloaded from Azure, the file is excluded
from source control by default in the .gitignore file.
3. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
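For example, using Core Tools (the --authlevel value shown here matches the AuthorizationLevel.Anonymous setting in the generated code below):

func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous"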
HttpExample.cs contains a Run method that receives request data in the req variable, an HttpRequest that's
decorated with the HttpTriggerAttribute, which defines the trigger behavior.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
namespace LocalFunctionProj
{
public static class HttpExample
{
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");

// Read the name from the query string, falling back to the request body.
string name = req.Query["name"];

string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
name = name ?? data?.name;

return name != null
    ? (ActionResult)new OkObjectResult($"Hello, {name}")
    : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
}
}
The return object is an ActionResult that returns a response message as either an OkObjectResult (200) or a
BadRequestObjectResult (400).
To learn more, see Azure Functions HTTP triggers and bindings.
1. Run your function by starting the local Azure Functions runtime host from the LocalFunctionProj folder:
func start
Toward the end of the output, the following lines should appear:
...
Http Functions:
2. Copy the URL of your HttpExample function from this output to a browser:
In-process
Isolated process
To the function URL, append the query string ?name=<YOUR_NAME>, making the full URL like
http://localhost:7071/api/HttpExample?name=Functions. The browser should display a response message
that echoes back your query string value. The terminal in which you started your project also shows log
output as you make requests.
3. When you're done, use Ctrl +C and choose y to stop the functions host.
1. If you haven't done so already, sign in to Azure:
Azure CLI
Azure PowerShell
az login
Azure CLI
Azure PowerShell
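2. Create a resource group named AzureFunctionsQuickstart-rg, which the later commands in this quickstart assume:

az group create --name AzureFunctionsQuickstart-rg --location <REGION>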
The az group create command creates a resource group. In the above command, replace <REGION> with a
region near you, using an available region code returned from the az account list-locations command.
3. Create a general-purpose storage account in your resource group and region:
Azure CLI
Azure PowerShell
az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group
AzureFunctionsQuickstart-rg --sku Standard_LRS
Azure CLI
Azure PowerShell
NOTE
This command creates a function app using the 3.x version of the Azure Functions runtime. The
func azure functionapp publish command that you'll run later updates the app to version 4.x.
In the previous example, replace <STORAGE_NAME> with the name of the account you used in the previous
step, and replace <APP_NAME> with a globally unique name appropriate to you. The <APP_NAME> is also the
default DNS domain for the function app.
This command creates a function app running in your specified language runtime under the Azure
Functions Consumption Plan, which is free for the amount of usage you incur here. The command also
provisions an associated Azure Application Insights instance in the same resource group, with which you
can monitor your function app and view logs. For more information, see Monitor Azure Functions. The
instance incurs no costs until you activate it.
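To deploy your local project to this app, use the Core Tools publish command, replacing <APP_NAME> with the name of your app:

func azure functionapp publish <APP_NAME>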
The publish command shows results similar to the following output (truncated for simplicity):
...
...
Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in msdocs-azurefunctions-qs:
HttpExample - [httpTrigger]
Invoke url: https://msdocs-azurefunctions-qs.azurewebsites.net/api/httpexample
In-process
Isolated process
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . When you navigate to this URL, the browser should display
similar output as when you ran the function locally.
Run the following command to view near real-time streaming logs:
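For example, with Core Tools (replace <APP_NAME> with the name of your function app):

func azure functionapp logstream <APP_NAME>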
In a separate terminal window or in the browser, call the remote function again. A verbose log of the function
execution in Azure is shown in the terminal.
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
Azure CLI
Azure PowerShell
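For example, with the Azure CLI:

az group delete --name AzureFunctionsQuickstart-rg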
Next steps
In-process
Isolated process
Quickstart: Create a Java function in Azure from the command line
In this article, you use command-line tools to create a Java function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
If Maven isn't your preferred development tool, check out our similar tutorials for Java developers:
Gradle
IntelliJ IDEA
Visual Studio Code
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
IMPORTANT
Use -DjavaVersion=11 if you want your functions to run on Java 11. To learn more, see Java versions.
The JAVA_HOME environment variable must be set to the install location of the correct version of the JDK to
complete this article.
2. Maven asks you for values needed to finish generating the project on deployment.
Provide the following values when prompted:
Prompt | Value | Description
cd fabrikam-functions
This folder contains various files for the project, including configuration files named local.settings.json
and host.json. Because local.settings.json can contain secrets downloaded from Azure, the file is excluded
from source control by default in the .gitignore file.
(Optional) Examine the file contents
If desired, you can skip to Run the function locally and examine the file contents later.
Function.java
Function.java contains a run method that receives request data in the request variable, an
HttpRequestMessage that's decorated with the HttpTrigger annotation, which defines the trigger behavior.
/**
* Copyright (c) Microsoft Corporation. All rights reserved.
* Licensed under the MIT License. See License.txt in the project root for
* license information.
*/
package com.functions;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FixedDelayRetry;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;
import java.util.Optional;
/**
* Azure Functions with HTTP Trigger.
*/
public class Function {
// count is used by the HttpExampleRetry function below to simulate transient failures.
public static int count = 1;
/**
* This function listens at endpoint "/api/HttpExample". Two ways to invoke it using "curl" command in
bash:
* 1. curl -d "HTTP Body" {your host}/api/HttpExample
* 2. curl "{your host}/api/HttpExample?name=HTTP%20Query"
*/
@FunctionName("HttpExample")
public HttpResponseMessage run(
@HttpTrigger(
name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
// Parse the name from the query string, falling back to the request body.
final String query = request.getQueryParameters().get("name");
final String name = request.getBody().orElse(query);
if (name == null) {
return request.createResponseBuilder(HttpStatus.BAD_REQUEST).body("Please pass a name on the
query string or in the request body").build();
} else {
return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}
}
/**
* This function listens at endpoint "/api/HttpExampleRetry". The function is re-executed in case of
errors until the maximum number of retries occur.
* Retry policies: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-error-
pages?tabs=java
*/
@FunctionName("HttpExampleRetry")
@FixedDelayRetry(maxRetryCount = 3, delayInterval = "00:00:05")
public HttpResponseMessage HttpExampleRetry(
@HttpTrigger(
name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
final ExecutionContext context) throws Exception {
context.getLogger().info("Java HTTP trigger processed a request.");
// Parse the name from the query string, falling back to the request body.
final String query = request.getQueryParameters().get("name");
final String name = request.getBody().orElse(query);
if (count < 3) {
count++;
throw new Exception("error");
}
if (name == null) {
return request.createResponseBuilder(HttpStatus.BAD_REQUEST).body("Please pass a name on the
query string or in the request body").build();
} else {
return request.createResponseBuilder(HttpStatus.OK).body(name).build();
}
}
/**
* This function listens at endpoint "/api/HttpTriggerJavaVersion".
* It can be used to verify the Java home and java version currently in use in your Azure function
*/
@FunctionName("HttpTriggerJavaVersion")
public static HttpResponseMessage HttpTriggerJavaVersion(
@HttpTrigger(
name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
final ExecutionContext context
) {
context.getLogger().info("Java HTTP trigger processed a request.");
final String javaVersion = getJavaVersion();
context.getLogger().info("Function - HttpTriggerJavaVersion" + javaVersion);
return request.createResponseBuilder(HttpStatus.OK).body("HttpTriggerJavaVersion").build();
}

// Used by HttpTriggerJavaVersion above to report the current Java installation.
public static String getJavaVersion() {
return String.join(" - ", System.getenv("JAVA_HOME"), System.getProperty("java.version"));
}
}
You can change these settings to control how resources are created in Azure, such as by changing runtime.os
from windows to linux before initial deployment. For a complete list of settings supported by the Maven plug-
in, see the configuration details.
FunctionTest.java
The archetype also generates a unit test for your function. When you change your function to add bindings or
add new functions to the project, you'll also need to modify the tests in the FunctionTest.java file.
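To build the project and start the local Functions runtime host, you can use the Maven plug-in goals from the project root (a representative invocation using the azure-functions-maven-plugin):

mvn clean package
mvn azure-functions:run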
Toward the end of the output, the following lines should appear:
...
Http Functions:
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the
project. In that case, use Ctrl+C to stop the host, navigate to the project's root folder, and run the previous
command again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a message that echoes back your query string value. The terminal in which you
started your project also shows log output as you make requests.
3. When you're done, use Ctrl +C and choose y to stop the functions host.
TIP
To create a function app running on Linux instead of Windows, change the runtime.os element in the pom.xml file from
windows to linux . Running Linux in a consumption plan is supported in these regions. You can't have apps that run
on Linux and apps that run on Windows in the same resource group.
1. Before you can deploy, sign in to your Azure subscription using either Azure CLI or Azure PowerShell.
Azure CLI
Azure PowerShell
az login
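2. Deploy your code to a new function app by using the azure-functions:deploy goal of the Maven plug-in (a representative invocation from the project root):

mvn azure-functions:deploy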
Browser
curl
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . The browser should display similar output as when you ran
the function locally.
In a separate terminal window or in the browser, call the remote function again. A verbose log of the function
execution in Azure is shown in the terminal.
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
Azure CLI
Azure PowerShell
az group delete --name java-functions-group
Next steps
Connect to an Azure Storage queue
Quickstart: Create a JavaScript function in Azure
from the command line
8/2/2022 • 8 minutes to read
In this article, you use command-line tools to create a JavaScript function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There is also a Visual Studio Code-based version of this article.
In a terminal or command window, run func --version to check that the Azure Functions Core Tools are
version 4.x.
Run az --version to check that the Azure CLI version is 2.4 or later.
Run az login to sign in to Azure and verify an active subscription.
This folder contains various files for the project, including configuration files named local.settings.json
and host.json. Because local.settings.json can contain secrets downloaded from Azure, the file is excluded
from source control by default in the .gitignore file.
3. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
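For example, using Core Tools:

func new --name HttpExample --template "HTTP trigger"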
func new creates a subfolder matching the function name that contains a code file appropriate to the
project's chosen language and a configuration file named function.json.
(Optional) Examine the file contents
If desired, you can skip to Run the function locally and examine the file contents later.
index.js
index.js exports a function that's triggered according to the configuration in function.json.
module.exports = async function (context, req) {
    const name = (req.query.name || (req.body && req.body.name));
    const responseMessage = name
        ? "Hello, " + name + ". This HTTP triggered function executed successfully."
        : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";
    context.res = {
        // status: 200, /* Defaults to 200 */
        body: responseMessage
    };
}
For an HTTP trigger, the function receives request data in the variable req as defined in function.json. The
response is defined as res in function.json and can be accessed using context.res . To learn more, see Azure
Functions HTTP triggers and bindings.
function.json
function.json is a configuration file that defines the input and output bindings for the function, including the
trigger type.
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type
httpTrigger and output binding of type http .
1. Run your function by starting the local Azure Functions runtime host from the root of the project folder:
func start
Toward the end of the output, the following lines must appear:
...
Http Functions:
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the
project. In that case, use Ctrl+C to stop the host, go to the project's root folder, and run the previous command
again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a response message that echoes back your query string value. The terminal in
which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl + C and type y to stop the functions host.
Create supporting Azure resources for your function
Before you can deploy your function code to Azure, you need to create three resources:
A resource group, which is a logical container for related resources.
A Storage account, which is used to maintain state and other information about your functions.
A function app, which provides the environment for executing your function code. A function app maps to
your local function project and lets you group functions as a logical unit for easier management, deployment,
and sharing of resources.
Use the following commands to create these items. Both Azure CLI and PowerShell are supported.
1. If you haven't done so already, sign in to Azure:
Azure CLI
Azure PowerShell
az login
Azure CLI
Azure PowerShell
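2. Create a resource group named AzureFunctionsQuickstart-rg, which the later commands in this quickstart assume:

az group create --name AzureFunctionsQuickstart-rg --location <REGION>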
The az group create command creates a resource group. In the above command, replace <REGION> with a
region near you, using an available region code returned from the az account list-locations command.
3. Create a general-purpose storage account in your resource group and region:
Azure CLI
Azure PowerShell
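For example, with the Azure CLI:

az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS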
Azure CLI
Azure PowerShell
az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location
<REGION> --runtime node --runtime-version 14 --functions-version 4 --name <APP_NAME> --storage-
account <STORAGE_NAME>
The az functionapp create command creates the function app in Azure. If you're using Node.js 16, also
change --runtime-version to 16 .
In the previous example, replace <STORAGE_NAME> with the name of the account you used in the previous
step, and replace <APP_NAME> with a globally unique name appropriate to you. The <APP_NAME> is also the
default DNS domain for the function app.
This command creates a function app running in your specified language runtime under the Azure
Functions Consumption Plan, which is free for the amount of usage you incur here. The command also
provisions an associated Azure Application Insights instance in the same resource group, with which you
can monitor your function app and view logs. For more information, see Monitor Azure Functions. The
instance incurs no costs until you activate it.
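To deploy your local project to this app, use the Core Tools publish command, replacing <APP_NAME> with the name of your app:

func azure functionapp publish <APP_NAME>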
The publish command shows results similar to the following output (truncated for simplicity):
...
...
Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in msdocs-azurefunctions-qs:
HttpExample - [httpTrigger]
Invoke url: https://msdocs-azurefunctions-qs.azurewebsites.net/api/httpexample
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . The browser should display similar output as when you ran
the function locally.
Run the following command to view near real-time streaming logs:
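For example, with Core Tools (replace <APP_NAME> with the name of your function app):

func azure functionapp logstream <APP_NAME>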
In a separate terminal window or in the browser, call the remote function again. A verbose log of the function
execution in Azure is shown in the terminal.
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
Azure CLI
Azure PowerShell
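For example, with the Azure CLI:

az group delete --name AzureFunctionsQuickstart-rg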
Next steps
Connect to an Azure Storage queue
Quickstart: Create a PowerShell function in Azure
from the command line
8/2/2022 • 8 minutes to read
In this article, you use command-line tools to create a PowerShell function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There is also a Visual Studio Code-based version of this article.
In a terminal or command window, run func --version to check that the Azure Functions Core Tools are
version 4.x.
Run az --version to check that the Azure CLI version is 2.4 or later.
Run az login to sign in to Azure and verify an active subscription.
This folder contains various files for the project, including configuration files named local.settings.json
and host.json. Because local.settings.json can contain secrets downloaded from Azure, the file is excluded
from source control by default in the .gitignore file.
3. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
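For example, using Core Tools:

func new --name HttpExample --template "HTTP trigger"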
func new creates a subfolder matching the function name that contains a code file appropriate to the
project's chosen language and a configuration file named function.json.
(Optional) Examine the file contents
If desired, you can skip to Run the function locally and examine the file contents later.
run.ps1
run.ps1 defines a function script that's triggered according to the configuration in function.json.
using namespace System.Net
param($Request, $TriggerMetadata)

# Read the name from the query string.
$name = $Request.Query.Name
$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
if ($name) {
    $body = "Hello, $name. This HTTP triggered function executed successfully."
}
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ StatusCode = [HttpStatusCode]::OK; Body = $body })
For an HTTP trigger, the function receives request data passed to the $Request param defined in function.json.
The return object, defined as Response in function.json, is passed to the Push-OutputBinding cmdlet as the
response.
function.json
function.json is a configuration file that defines the input and output bindings for the function, including the
trigger type.
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
}
Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type
httpTrigger and output binding of type http .
1. Run your function by starting the local Azure Functions runtime host from the root of the project folder:
func start
Toward the end of the output, the following lines must appear:
...
Http Functions:
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the
project. In that case, use Ctrl+C to stop the host, go to the project's root folder, and run the previous command
again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a response message that echoes back your query string value. The terminal in
which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl + C and type y to stop the functions host.
Create supporting Azure resources for your function
Before you can deploy your function code to Azure, you need to create three resources:
A resource group, which is a logical container for related resources.
A Storage account, which is used to maintain state and other information about your functions.
A function app, which provides the environment for executing your function code. A function app maps to
your local function project and lets you group functions as a logical unit for easier management, deployment,
and sharing of resources.
Use the following commands to create these items. Both Azure CLI and PowerShell are supported.
1. If you haven't done so already, sign in to Azure:
Azure CLI
Azure PowerShell
az login
Azure CLI
Azure PowerShell
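2. Create a resource group named AzureFunctionsQuickstart-rg in a region near you:

az group create --name AzureFunctionsQuickstart-rg --location <REGION>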
The az group create command creates a resource group. In the above command, replace <REGION> with a
region near you, using an available region code returned from the az account list-locations command.
3. Create a general-purpose storage account in your resource group and region:
Azure CLI
Azure PowerShell
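For example, using Azure CLI:

az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS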
Azure CLI
Azure PowerShell
az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime powershell --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME>
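With the resources in place, deploy your local project to Azure by running the func azure functionapp publish command, replacing <APP_NAME> with the name of your app:

func azure functionapp publish <APP_NAME>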
The publish command shows results similar to the following output (truncated for simplicity):
...
...
Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in msdocs-azurefunctions-qs:
HttpExample - [httpTrigger]
Invoke url: https://msdocs-azurefunctions-qs.azurewebsites.net/api/httpexample
Browser
curl
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . The browser should display similar output as when you ran
the function locally.
Run the following command to view near real-time streaming logs:
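Replace <APP_NAME> with the name of your function app:

func azure functionapp logstream <APP_NAME>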
In a separate terminal window or in the browser, call the remote function again. A verbose log of the function
execution in Azure is shown in the terminal.
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
Azure CLI
Azure PowerShell
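az group delete --name AzureFunctionsQuickstart-rg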
Next steps
Connect to an Azure Storage queue
Quickstart: Create a Python function in Azure from
the command line
In this article, you use command-line tools to create a Python function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a Visual Studio Code-based version of this article.
In a terminal or command window, run func --version to check that the Azure Functions Core Tools
version is 4.x.
Run az --version to check that the Azure CLI version is 2.4 or later.
Run az login to sign in to Azure and verify an active subscription.
Run python --version (Linux/macOS) or py --version (Windows) to check that your Python version is 3.9.x, 3.8.x, or 3.7.x.
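In a suitable folder, create a virtual environment named .venv, which the activation command that follows assumes:

python -m venv .venv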
source .venv/bin/activate
If Python didn't install the venv package on your Linux distribution, run the following command:
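On Debian-based distributions, for example:

sudo apt-get install python3-venv

With the virtual environment activated, create the project and switch into its folder. The project name LocalFunctionProj matches the rest of this quickstart:

func init LocalFunctionProj --python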
cd LocalFunctionProj
This folder contains various files for the project, including configuration files named local.settings.json and host.json. Because local.settings.json can contain secrets downloaded from Azure, the file is excluded from source control by default in the .gitignore file.
3. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
func new creates a subfolder matching the function name that contains a code file appropriate to the
project's chosen language and a configuration file named function.json.
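For example:

func new --name HttpExample --template "HTTP trigger"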
Get the list of templates by using the following command:
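func templates list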
__init__.py contains a main() Python function that's triggered according to the configuration in function.json:

import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
For an HTTP trigger, the function receives request data in the variable req as defined in function.json. req is an
instance of the azure.functions.HttpRequest class. The return object, defined as $return in function.json, is an
instance of azure.functions.HttpResponse class. For more information, see Azure Functions HTTP triggers and
bindings.
function.json
function.json is a configuration file that defines the input and output bindings for the function, including the
trigger type.
If desired, you can change scriptFile to invoke a different Python file.
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type httpTrigger and an output binding of type http.
Run the function locally
1. Run your function by starting the local Azure Functions runtime host from the LocalFunctionProj folder.
func start
Toward the end of the output, the following lines must appear:
...
Http Functions:
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the
project. In that case, use Ctrl+C to stop the host, go to the project's root folder, and run the previous command
again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a response message that echoes back your query string value. The terminal in
which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl + C and type y to stop the functions host.
az login
az config param-persist on
Azure CLI
Azure PowerShell
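az group create --name AzureFunctionsQuickstart-rg --location <REGION>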
The az group create command creates a resource group. In the above command, replace <REGION> with a
region near you, using an available region code returned from the az account list-locations command.
NOTE
You can't host Linux and Windows apps in the same resource group. If you have an existing resource group
named AzureFunctionsQuickstart-rg with a Windows function app or web app, you must use a different
resource group.
Azure CLI
Azure PowerShell
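Create a general-purpose storage account in your resource group and region:

az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS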
Azure CLI
Azure PowerShell
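Create the function app in Azure. The following command is a representative example, assuming Python 3.9 and the 4.x Functions runtime that matches the Core Tools version noted earlier:

az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime python --runtime-version 3.9 --functions-version 4 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>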
The az functionapp create command creates the function app in Azure. If you're using Python 3.8 or 3.7, change --runtime-version to 3.8 or 3.7, respectively. You must supply --os-type linux because Python functions can't run on Windows, which is the default.
In the previous example, replace <APP_NAME> with a globally unique name appropriate to you. The
<APP_NAME> is also the default DNS domain for the function app.
This command creates a function app running in your specified language runtime under the Azure
Functions Consumption Plan, which is free for the amount of usage you incur here. The command also
provisions an associated Azure Application Insights instance in the same resource group, with which you
can monitor your function app and view logs. For more information, see Monitor Azure Functions. The
instance incurs no costs until you activate it.
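Deploy your local project to Azure by running the func azure functionapp publish command from the project root, replacing <APP_NAME> with the name of your app:

func azure functionapp publish <APP_NAME>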
The publish command shows results similar to the following output (truncated for simplicity):
...
...
Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in msdocs-azurefunctions-qs:
HttpExample - [httpTrigger]
Invoke url: https://msdocs-azurefunctions-qs.azurewebsites.net/api/httpexample
Browser
curl
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . The browser should display similar output as when you ran
the function locally.
Run the following command to view near real-time streaming logs in Application Insights in the Azure portal.
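Replace <APP_NAME> with the name of your function app:

func azure functionapp logstream <APP_NAME> --browser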
In a separate terminal window or in the browser, call the remote function again. A verbose log of the function
execution in Azure is shown in the terminal.
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
Azure CLI
Azure PowerShell
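az group delete --name AzureFunctionsQuickstart-rg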
Next steps
Connect to an Azure Storage queue
Quickstart: Create a TypeScript function in Azure
from the command line
In this article, you use command-line tools to create a TypeScript function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There is also a Visual Studio Code-based version of this article.
In a terminal or command window, run func --version to check that the Azure Functions Core Tools are
version 4.x.
Run az --version to check that the Azure CLI version is 2.4 or later.
Run az login to sign in to Azure and verify an active subscription.
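Create a local Functions project with the TypeScript runtime and switch into its folder. The project name LocalFunctionProj matches the rest of this quickstart:

func init LocalFunctionProj --typescript
cd LocalFunctionProj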
This folder contains various files for the project, including configuration files named local.settings.json
and host.json. Because local.settings.json can contain secrets downloaded from Azure, the file is excluded
from source control by default in the .gitignore file.
3. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
func new creates a subfolder matching the function name that contains a code file appropriate to the
project's chosen language and a configuration file named function.json.
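For example:

func new --name HttpExample --template "HTTP trigger"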
(Optional) Examine the file contents
If desired, you can skip to Run the function locally and examine the file contents later.
index.ts
index.ts exports a function that's triggered according to the configuration in function.json.
import { AzureFunction, Context, HttpRequest } from "@azure/functions"

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));
    const responseMessage = name
        ? "Hello, " + name + ". This HTTP triggered function executed successfully."
        : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";

    context.res = {
        // status: 200, /* Defaults to 200 */
        body: responseMessage
    };
};

export default httpTrigger;
For an HTTP trigger, the function receives request data in the variable req of type HttpRequest as defined in function.json. The output object, defined as res in function.json, is set on context.res and returned as the response.
function.json
function.json is a configuration file that defines the input and output bindings for the function, including the
trigger type.
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type httpTrigger and an output binding of type http.
npm install
npm start
Toward the end of the output, the following lines should appear:
...
Http Functions:
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the project. In that case, use Ctrl+C to stop the host, go to the project's root folder, and run the previous command again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string ?name=<YOUR_NAME>, making the full URL like http://localhost:7071/api/HttpExample?name=Functions. The browser should display a message like Hello Functions:
The terminal in which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl+C and type y to stop the functions host.
Azure CLI
Azure PowerShell
az login
Azure CLI
Azure PowerShell
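az group create --name AzureFunctionsQuickstart-rg --location <REGION>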
The az group create command creates a resource group. In the above command, replace <REGION> with a
region near you, using an available region code returned from the az account list-locations command.
3. Create a general-purpose storage account in your resource group and region:
Azure CLI
Azure PowerShell
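az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS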
Azure CLI
Azure PowerShell
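Create the function app in Azure. The following command is a representative example, assuming Node.js 14 (see the note that follows about Node.js 16):

az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME>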
The az functionapp create command creates the function app in Azure. If you're using Node.js 16, also change --runtime-version to 16.
In the previous example, replace <STORAGE_NAME> with the name of the account you used in the previous
step, and replace <APP_NAME> with a globally unique name appropriate to you. The <APP_NAME> is also the
default DNS domain for the function app.
This command creates a function app running in your specified language runtime under the Azure
Functions Consumption Plan, which is free for the amount of usage you incur here. The command also
provisions an associated Azure Application Insights instance in the same resource group, with which you
can monitor your function app and view logs. For more information, see Monitor Azure Functions. The
instance incurs no costs until you activate it.
2. With the necessary resources in place, you're now ready to deploy your local functions project to the
function app in Azure by using the func azure functionapp publish command. In the following example,
replace <APP_NAME> with the name of your app.
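The generated package.json defines a build script, so build the TypeScript project first and then publish:

npm run build
func azure functionapp publish <APP_NAME>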
If you see the error, "Can't find app with name ...", wait a few seconds and try again, as Azure may not
have fully initialized the app after the previous az functionapp create command.
The publish command shows results similar to the following output (truncated for simplicity):
...
...
Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in msdocs-azurefunctions-qs:
HttpExample - [httpTrigger]
Invoke url: https://msdocs-azurefunctions-qs.azurewebsites.net/api/httpexample?
code=KYHrydo4GFe9y0000000qRgRJ8NdLFKpkakGJQfC3izYVidzzDN4gQ==
Browser
curl
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . The browser should display similar output as when you ran
the function locally.
In a separate terminal window or in the browser, call the remote function again. A verbose log of the function
execution in Azure is shown in the terminal.
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
Azure CLI
Azure PowerShell
az group delete --name AzureFunctionsQuickstart-rg
Next steps
Connect to an Azure Storage queue
Quickstart: Create and deploy Azure Functions
resources using Bicep
In this article, you use Azure Functions with Bicep to create a function app and related resources in Azure. The
function app provides an execution context for your function code executions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides
concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for
your infrastructure-as-code solutions in Azure.
After you create the function app, you can deploy Azure Functions project code to that app.
Prerequisites
Azure account
Before you begin, you must have an Azure account with an active subscription. Create an account for free.
The following four Azure resources are created by this Bicep file:
Microsoft.Storage/storageAccounts: create an Azure Storage account, which is required by Functions.
Microsoft.Web/serverfarms: create a serverless Consumption hosting plan for the function app.
Microsoft.Web/sites: create a function app.
microsoft.insights/components: create an Application Insights instance for monitoring.
CLI
PowerShell
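Assuming you saved the Bicep file locally as main.bicep, you can deploy it with Azure CLI; the resource group name exampleRG is illustrative:

az group create --name exampleRG --location <location>
az deployment group create --resource-group exampleRG --template-file main.bicep --parameters appInsightsLocation=<app-location>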
NOTE
Replace <app-location> with the region for Application Insights, which is usually the same as the resource
group.
When the deployment finishes, you should see a message indicating the deployment succeeded.
CLI
PowerShell
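For example, you can use Azure CLI to list the deployed resources:

az resource list --resource-group exampleRG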
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, if you no longer need the resources, use Azure CLI, PowerShell, or Azure portal to delete the resource
group and its resources.
CLI
PowerShell
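az group delete --name exampleRG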
Next steps
Now that you've created your function app resources in Azure, you can deploy your code to the existing app by
using one of the following tools:
Visual Studio Code
Visual Studio
Azure Functions Core Tools
Quickstart: Create and deploy Azure Functions
resources from an ARM template
In this article, you use Azure Functions with an Azure Resource Manager template (ARM template) to create a
function app and related resources in Azure. The function app provides an execution context for your function
code executions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment
without writing the sequence of programming commands to create the deployment.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the Deploy to
Azure button. The template will open in the Azure portal.
After you create the function app, you can deploy Azure Functions project code to that app.
Prerequisites
Azure account
Before you begin, you must have an Azure account with an active subscription. Create an account for free.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"_generator": {
"name": "bicep",
"version": "0.5.6.12127",
"templateHash": "10848576439099634716"
}
},
"parameters": {
"appName": {
"type": "string",
"defaultValue": "[format('fnapp{0}', uniqueString(resourceGroup().id))]",
"metadata": {
"description": "The name of the function app that you wish to create."
}
},
"storageAccountType": {
"type": "string",
"defaultValue": "Standard_LRS",
"allowedValues": [
"Standard_LRS",
"Standard_GRS",
"Standard_RAGRS"
],
"metadata": {
"description": "Storage Account type"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"appInsightsLocation": {
"type": "string",
"metadata": {
"description": "Location for Application Insights"
}
},
"runtime": {
"type": "string",
"defaultValue": "node",
"allowedValues": [
"node",
"dotnet",
"java"
],
"metadata": {
"description": "The language worker runtime to load in the function app."
}
}
},
"variables": {
"functionAppName": "[parameters('appName')]",
"hostingPlanName": "[parameters('appName')]",
"applicationInsightsName": "[parameters('appName')]",
"storageAccountName": "[format('{0}azfunctions', uniqueString(resourceGroup().id))]",
"functionWorkerRuntime": "[parameters('runtime')]"
},
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2021-08-01",
"name": "[variables('storageAccountName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('storageAccountType')]"
},
"kind": "Storage"
},
{
"type": "Microsoft.Web/serverfarms",
"apiVersion": "2021-03-01",
"name": "[variables('hostingPlanName')]",
"location": "[parameters('location')]",
"sku": {
"name": "Y1",
"tier": "Dynamic"
},
"properties": {}
},
{
"type": "Microsoft.Web/sites",
"apiVersion": "2021-03-01",
"name": "[variables('functionAppName')]",
"location": "[parameters('location')]",
"kind": "functionapp",
"identity": {
"type": "SystemAssigned"
},
"properties": {
"properties": {
"serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
"siteConfig": {
"appSettings": [
{
"name": "AzureWebJobsStorage",
"value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix=
{1};AccountKey={2}', variables('storageAccountName'), environment().suffixes.storage,
listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2021-08-
01').keys[0].value)]"
},
{
"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
"value": "[format('DefaultEndpointsProtocol=https;AccountName={0};EndpointSuffix=
{1};AccountKey={2}', variables('storageAccountName'), environment().suffixes.storage,
listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2021-08-
01').keys[0].value)]"
},
{
"name": "WEBSITE_CONTENTSHARE",
"value": "[toLower(variables('functionAppName'))]"
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~2"
},
{
"name": "WEBSITE_NODE_DEFAULT_VERSION",
"value": "~10"
},
{
"name": "APPINSIGHTS_INSTRUMENTATIONKEY",
"value": "[reference(resourceId('Microsoft.Insights/components',
variables('applicationInsightsName'))).InstrumentationKey]"
},
{
"name": "FUNCTIONS_WORKER_RUNTIME",
"value": "[variables('functionWorkerRuntime')]"
}
],
"ftpsState": "FtpsOnly",
"minTlsVersion": "1.2"
},
"httpsOnly": true
},
"dependsOn": [
"[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]",
"[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
]
},
{
"type": "Microsoft.Insights/components",
"apiVersion": "2020-02-02",
"name": "[variables('applicationInsightsName')]",
"location": "[parameters('appInsightsLocation')]",
"kind": "web",
"properties": {
"Application_Type": "web",
"Request_Source": "rest"
}
}
]
}
Azure CLI
Azure PowerShell
read -p "Enter a resource group name that is used for generating resource names:" resourceGroupName &&
read -p "Enter the location (like 'eastus' or 'northeurope'):" location &&
templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json" &&
az group create --name $resourceGroupName --location "$location" &&
az deployment group create --resource-group $resourceGroupName --template-uri $templateUri &&
echo "Press [ENTER] to continue ..." &&
read
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
Azure CLI
Azure PowerShell
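Using the resource group name you entered for the deployment script:

az group delete --name $resourceGroupName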
Next steps
Now that you've created your function app resources in Azure, you can deploy your code to the existing app by
using one of the following tools:
Visual Studio Code
Visual Studio
Azure Functions Core Tools
Create your first function on Azure Arc (preview)
In this quickstart, you create an Azure Functions project and deploy it to a function app running on an Azure
Arc-enabled Kubernetes cluster. To learn more, see App Service, Functions, and Logic Apps on Azure Arc. This
scenario only supports function apps running on Linux.
NOTE
Support for running functions on an Azure Arc-enabled Kubernetes cluster is currently in preview.
Publishing PowerShell function projects to Azure Arc-enabled Kubernetes clusters isn't currently supported. If you need to
deploy PowerShell functions to Azure Arc-enabled Kubernetes clusters, create your function app in a container.
Prerequisites
On your local computer:
C#
JavaScript
Python
NOTE
When you create the environment, make a note of both the custom location name and the name of the resource group that contains the custom location. You can use these to find the custom location ID, which you'll need when creating your function app in the environment.
If you didn't create the environment, check with your cluster administrator.
Because these CLI commands are not yet part of the core CLI set, add them with the following commands:
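az extension add --upgrade --yes --name customlocation
az extension remove --name appservice-kube
az extension add --upgrade --yes --name appservice-kube

Create the local project for your chosen runtime (dotnet is shown here as an example) before changing into its folder:

func init LocalFunctionProj --dotnet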
cd LocalFunctionProj
This folder contains various files for the project, including configuration files named local.settings.json
and host.json. By default, the local.settings.json file is excluded from source control in the .gitignore file.
This exclusion is because the file can contain secrets that are downloaded from Azure.
3. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
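For example:

func new --name HttpExample --template "HTTP trigger"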
func start
Toward the end of the output, the following lines must appear:
...
Http Functions:
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a response message that echoes back your query string value. The terminal in
which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl + C and type y to stop the functions host.
customLocationGroup="<resource-group-containing-custom-location>"
customLocationName="<name-of-custom-location>"
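Get the custom location ID into a variable for the app-creation step; this sketch assumes the two values set above:

customLocationId=$(az customlocation show --resource-group $customLocationGroup --name $customLocationName --query id --output tsv)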
NOTE
Function apps run in an App Service Kubernetes environment on a Dedicated (App Service) plan. When you create your
function app without an existing plan, the correct plan is created for you.
az storage account create --name <STORAGE_NAME> --location westeurope --resource-group myResourceGroup --sku
Standard_LRS
NOTE
A storage account is currently required by Azure Functions tooling.
In the previous example, replace <STORAGE_NAME> with a name that is appropriate to you and unique in Azure Storage. Names must be between 3 and 24 characters and can contain numbers and lowercase letters only. Standard_LRS specifies a general-purpose account, which is supported by Functions. The --location value is a standard Azure region.
Create the function app
Run the az functionapp create command to create a new function app in the environment.
C#
JavaScript
Python
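The following Azure CLI command is a representative example for a C# app; swap the --runtime value for node or python as appropriate, and note that myResourceGroup matches the storage account step above:

az functionapp create --resource-group myResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime dotnet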
In this example, replace <CUSTOM_LOCATION_ID> with the ID of the custom location you determined for the App
Service Kubernetes environment. Also, replace <STORAGE_NAME> with the name of the account you used in the
previous step, and replace <APP_NAME> with a globally unique name appropriate to you.
The publish command shows results similar to the following output (truncated for simplicity):
...
...
Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in msdocs-azurefunctions-qs:
HttpExample - [httpTrigger]
Invoke url: https://msdocs-azurefunctions-qs.azurewebsites.net/api/httpexample
Because it can take some time for a full deployment to complete on an Azure Arc-enabled Kubernetes cluster,
you may want to re-run the following command to verify your published functions:
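func azure functionapp list-functions <APP_NAME>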
Browser
curl
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . The browser should display similar output as when you ran
the function locally.
Next steps
Now that you have your function app running in an Azure Arc-enabled App Service Kubernetes environment, you can connect it to Azure Storage by adding a Queue Storage output binding.
C#
JavaScript
Python
In this quickstart, you create an Azure Functions project running in a custom container and deploy it to an Azure
Arc-enabled Kubernetes cluster from your Docker Hub account. To learn more, see App Service, Functions, and
Logic Apps on Azure Arc. This scenario only supports function apps running on Linux.
NOTE
Support for running functions on an Azure Arc-enabled Kubernetes cluster is currently in preview.
Prerequisites
On your local computer:
C#
JavaScript
Python
NOTE
When you create the environment, make a note of both the custom location name and the name of the resource group that contains the custom location. You can use these to find the custom location ID, which you'll need when creating your function app in the environment.
If you didn't create the environment, check with your cluster administrator.
Because these CLI commands are not yet part of the core CLI set, add them with the following commands:
az extension add --upgrade --yes --name customlocation
az extension remove --name appservice-kube
az extension add --upgrade --yes --name appservice-kube
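Create the local project with a Dockerfile by passing the --docker option to func init; for example, for a JavaScript project (matching the Dockerfile example later in this section):

func init LocalFunctionProj --javascript --docker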
The --docker option generates a Dockerfile for the project, which defines a suitable custom container
for use with Azure Functions and the selected runtime.
2. Navigate into the project folder:
cd LocalFunctionProj
This folder contains the Dockerfile and other files for the project, including configuration files named
local.settings.json and host.json. By default, the local.settings.json file is excluded from source control in
the .gitignore file. This exclusion is because the file can contain secrets that are downloaded from Azure.
3. Open the generated Dockerfile and locate the 3.0 tag for the base image. If there's a 3.0 tag, replace
it with a 3.0.15885 tag. For example, in a JavaScript application, the Docker file should be modified to
have FROM mcr.microsoft.com/azure-functions/node:3.0.15885 . This version of the base image supports
deployment to an Azure Arc-enabled Kubernetes cluster.
4. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
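For example:

func new --name HttpExample --template "HTTP trigger"

Build the container image by running the docker build command from the project root, tagging the image with your Docker Hub account ID. The image name azurefunctionsimage:v1.0.0 is illustrative:

docker build --tag <DOCKER_ID>/azurefunctionsimage:v1.0.0 .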
In this example, replace <DOCKER_ID> with your Docker Hub account ID. When the command completes,
you can run the new container locally.
2. To test the build, run the image in a local container by using the docker run command, replacing <DOCKER_ID> with your Docker Hub account ID and adding the ports argument -p 8080:80:
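docker run -p 8080:80 -it <DOCKER_ID>/azurefunctionsimage:v1.0.0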
docker login
2. After you've signed in, push the image to Docker Hub by using the docker push command, again
replacing <docker_id> with your Docker ID.
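docker push <DOCKER_ID>/azurefunctionsimage:v1.0.0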
3. Depending on your network speed, pushing the image the first time might take a few minutes (pushing
subsequent changes is much faster). While you're waiting, you can proceed to the next section and create
Azure resources in another terminal.
customLocationGroup="<resource-group-containing-custom-location>"
customLocationName="<name-of-custom-location>"
NOTE
Function apps run in an App Service Kubernetes environment on a Dedicated (App Service) plan. When you create your
function app without an existing plan, a plan is created for you.
az storage account create --name <STORAGE_NAME> --location westeurope --resource-group myResourceGroup --sku
Standard_LRS
NOTE
A storage account is currently required by Azure Functions tooling.
In the previous example, replace <STORAGE_NAME> with a name that is appropriate to you and unique in Azure Storage. Names must be between 3 and 24 characters and can contain numbers and lowercase letters only. Standard_LRS specifies a general-purpose account, which is supported by Functions. The --location value is a standard Azure region.
Create the function app
Run the az functionapp create command to create a new function app in the environment.
C#
JavaScript
Python
In this example, replace <CUSTOM_LOCATION_ID> with the ID of the custom location you determined for the App
Service Kubernetes environment. Also, replace <STORAGE_NAME> with the name of the account you used in the
previous step, <APP_NAME> with a globally unique name appropriate to you, and <DOCKER_ID> with your Docker
Hub ID.
The deployment-container-image-name parameter specifies the image to use for the function app. You can use
the az functionapp config container show command to view information about the image used for deployment.
You can also use the az functionapp config container set command to deploy from a different image.
When you first create the function app, it pulls the initial image from your Docker Hub. You can also Enable
continuous deployment to Azure from Docker Hub.
To learn how to enable SSH in the image, see Enable SSH connections.
Set required app settings
Run the following commands to create an app setting for the storage account connection string:
This code must be run either in Cloud Shell or in Bash on your local computer. Replace <STORAGE_NAME> with the
name of the storage account and <APP_NAME> with the function app name.
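A sketch using Azure CLI, assuming the resource group myResourceGroup from the storage account step:

storageConnectionString=$(az storage account show-connection-string --resource-group myResourceGroup --name <STORAGE_NAME> --query connectionString --output tsv)

az functionapp config appsettings set --name <APP_NAME> --resource-group myResourceGroup --settings AzureWebJobsStorage=$storageConnectionString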
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . The browser should display similar output as when you ran
the function locally.
Next steps
Now that you have your function app running in a container in an Azure Arc-enabled App Service Kubernetes environment, you can connect it to Azure Storage by adding a Queue Storage output binding.
C#
JavaScript
Python
Azure Functions lets you connect Azure services and other resources to functions without having to write your
own integration code. These bindings, which represent both input and output, are declared within the function
definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input
binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more,
see Azure Functions triggers and bindings concepts.
This article shows you how to use Visual Studio Code to connect Azure Cosmos DB to the function you created
in the previous quickstart article. The output binding that you add to this function writes data from the HTTP
request to a JSON document stored in an Azure Cosmos DB container.
Before you begin, you must complete the quickstart: Create a C# function in Azure using Visual Studio Code. If
you already cleaned up resources at the end of that article, go through the steps again to recreate the function
app and related resources in Azure.
Before you begin, you must complete the quickstart: Create a JavaScript function in Azure using Visual Studio
Code. If you already cleaned up resources at the end of that article, go through the steps again to recreate the
function app and related resources in Azure.
1. In Visual Studio Code, choose the Azure icon in the Activity bar.
2. In the Azure: Databases area, right-click (Ctrl+click on macOS) on the Azure subscription where you created your function app in the previous article, and select Create Server.... Provide the following information at the prompts:

Select an Azure Database Server: Choose Core (SQL) to create a document database that you can query by using a SQL syntax. Learn more about the Azure Cosmos DB SQL API.

Select a resource group for new resources: Choose the resource group where you created your function app in the previous article.

Select a location for new resources: Select a geographic location to host your Azure Cosmos DB account. Use the location that's closest to you or your users to get the fastest access to your data.

Enter the partition key for the collection: Type /id as the partition key.
3. Choose the function app you created in the previous article. Provide the following information at the
prompts:
Enter value for "CosmosDbConnectionString": Paste the connection string of your Azure Cosmos DB account that you just copied.
This creates an application setting named CosmosDbConnectionString in your function app in Azure. Now, you can download this setting to your local.settings.json file.
4. Press F1 again to open the command palette, then search for and run the command
Azure Functions: Download Remote Settings... .
5. Choose the function app you created in the previous article. Select Yes to all to overwrite the existing
local settings.
This downloads all of the settings from Azure to your local project, including the new connection string setting. Most of the downloaded settings aren't used when running locally.
Your project has been configured to use extension bundles, which automatically installs a predefined set of
extension packages.
Extension bundles usage is enabled in the host.json file at the root of the project, which appears as follows:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[1.*, 2.0.0)"
}
}
Now, you can add the Azure Cosmos DB output binding to your project.
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
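A sketch of the parameter for the in-process model, using the database and container names assumed in this article:

[CosmosDB(databaseName: "my-database", collectionName: "my-container", ConnectionStringSetting = "CosmosDbConnectionString")] IAsyncCollector<dynamic> documentsOut,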
The documentsOut parameter is an IAsyncCollector<T> type, which represents a collection of JSON documents
that are written to your Azure Cosmos DB container when the function completes. Specific attributes indicate the
names of the container and its parent database. The connection string for your Azure Cosmos DB account is set
by the ConnectionStringSettingAttribute .
Specific attributes specify the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the CosmosDbConnectionString setting.
Binding attributes are defined directly in the function.json file. Depending on the binding type, additional
properties may be required. The Azure Cosmos DB output configuration describes the fields required for an
Azure Cosmos DB output binding. The extension makes it easy to add bindings to the function.json file.
To create a binding, right-click (Ctrl+click on macOS) the function.json file in your HttpTrigger folder and
choose Add binding.... Follow the prompts to define the following binding properties for the new binding:
Select binding with direction "out": Azure Cosmos DB. The binding is an Azure Cosmos DB binding.

The name used to identify this binding in your code: outputDocument. Name that identifies the binding parameter referenced in your code.

The Cosmos DB database where data will be written: my-database. The name of the Azure Cosmos DB database containing the target container.

Database collection where data will be written: my-container. The name of the Azure Cosmos DB container where the JSON documents will be written.

If true, creates the Cosmos DB database and collection: false. The target database and container already exist.

Partition key (optional): leave blank. Only required when the output binding creates the container.

Collection throughput (optional): leave blank. Only required when the output binding creates the container.
A binding is added to the bindings array in your function.json, which should look like the following after you remove any undefined values:
{
"type": "cosmosDB",
"direction": "out",
"name": "outputDocument",
"databaseName": "my-database",
"collectionName": "my-container",
"createIfNotExists": "false",
"connectionStringSetting": "CosmosDbConnectionString"
}
Add code that uses the documentsOut output binding object to create a JSON document. Add this code before
the method returns.
if (!string.IsNullOrEmpty(name))
{
// Add a JSON document to the output container.
await documentsOut.AddAsync(new
{
// create a random ID
id = System.Guid.NewGuid().ToString(),
name = name
});
}
Add code that uses the outputDocument output binding object on context.bindings to create a JSON document.
Add this code before the context.res statement.
if (name) {
context.bindings.outputDocument = JSON.stringify({
// create a random ID
id: new Date().toISOString() + Math.random().toString().substring(2, 10),
name: name
});
}
context.res = {
// status: 200, /* Defaults to 200 */
body: responseMessage
};
}
This code now returns a MultiResponse object that contains both a document and an HTTP response.
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With the Core Tools running, go to the Azure: Functions area. Under Functions , expand Local Project
> Functions . Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose
Execute Function Now....
3. When prompted with Enter request body, press Enter to send a request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in the Terminal panel.
5. Press Ctrl + C to stop Core Tools and disconnect the debugger.
3. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
4. After a response is returned, press Ctrl + C to stop Core Tools.
Verify that a JSON document has been created
1. On the Azure portal, go back to your Azure Cosmos DB account and select Data Explorer .
2. Expand your database and container, and select Items to list the documents created in your container.
3. Verify that a new JSON document has been created by the output binding.
Clean up resources
In Azure, resources refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You created resources to complete these quickstarts. You may be billed for these resources, depending on your
account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Over view tab, select the named link next to Resource group .
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group, and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've updated your HTTP triggered function to write JSON documents to an Azure Cosmos DB container. Now
you can learn more about developing Functions using Visual Studio Code:
Develop Azure Functions using Visual Studio Code
Azure Functions triggers and bindings.
Examples of complete Function projects in C#.
Azure Functions C# developer reference
Examples of complete Function projects in JavaScript.
Azure Functions JavaScript developer guide
Connect Azure Functions to Azure Storage using
Visual Studio Code
Azure Functions lets you connect Azure services and other resources to functions without having to write your
own integration code. These bindings, which represent both input and output, are declared within the function
definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input
binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more,
see Azure Functions triggers and bindings concepts.
In this article, you learn how to use Visual Studio Code to connect Azure Storage to the function you created in
the previous quickstart article. The output binding that you add to this function writes data from the HTTP
request to a message in an Azure Queue storage queue.
Most bindings require a stored connection string that Functions uses to access the bound service. To make it
easier, you use the storage account that you created with your function app. The connection to this account is
already stored in an app setting named AzureWebJobsStorage .
IMPORTANT
Because the local.settings.json file contains secrets, it never gets published and is excluded from source control.
3. Copy the value of the AzureWebJobsStorage key, which is the storage account connection string. You use this connection to verify that the output binding works as expected.
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[1.*, 2.0.0)"
}
}
Now, you can add the storage output binding to your project.
Except for HTTP and timer triggers, bindings are implemented as extension packages. Run the following dotnet
add package command in the Terminal window to add the Storage extension package to your project.
In-process
Isolated process
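For the in-process model:

dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage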
Now, you can add the storage output binding to your project.
Select binding with direction "out": Azure Queue Storage. The binding is an Azure Storage queue binding.

The name used to identify this binding in your code: msg. Name that identifies the binding parameter referenced in your code.

The queue to which the message will be sent: outqueue. The name of the queue that the binding writes to. When the queueName doesn't exist, the binding creates it on first use.
A binding is added to the bindings array in your function.json, which should look like the following:
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions
depend on whether your app runs in-process (C# class library) or in an isolated process.
In-process
Isolated process
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
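The parameter, as it also appears in the complete method definition shown later in this section:

[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,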
The msg parameter is an ICollector<T> type, representing a collection of messages written to an output
binding when the function completes. In this case, the output is a storage queue named outqueue . The
StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting
that contains the storage account connection string and can be applied at the class, method, or parameter level.
In this case, you could omit StorageAccountAttribute because you're already using the default storage account.
The Run method definition must now look like the following code:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In a Java project, the bindings are defined as binding annotations on the function method. The function.json file
is then autogenerated based on these annotations.
Browse to the location of your function code under src/main/java, open the Function.java project file, and add
the following parameter to the run method definition:
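The parameter, as it also appears in the complete method definition shown later in this section:

@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String> msg,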
The msg parameter is an OutputBinding<T> type, which represents a collection of strings that are written as
messages to an output binding when the function completes. In this case, the output is a storage queue named
outqueue . The connection string for the Storage account is set by the connection method. Rather than the
connection string itself, you pass the application setting that contains the Storage account connection string.
The run method definition should now look like the following example:
@FunctionName("HttpExample")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue",
connection = "AzureWebJobsStorage") OutputBinding<String> msg,
final ExecutionContext context) {
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this
code before the context.res statement.
context.bindings.msg = name;
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
context.log('HTTP trigger function processed a request.');
const name = (req.query.name || (req.body && req.body.name));
if (name) {
// Add a message to the storage queue,
// which is the name passed to the function.
context.bindings.msg = name;
// Send a "hello" response.
context.res = {
// status: 200, /* Defaults to 200 */
body: "Hello " + (req.query.name || req.body.name)
};
}
else {
context.res = {
status: 400,
body: "Please pass a name on the query string or in the request body"
};
}
};
Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add
this code before you set the OK status in the if statement.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
if ($name) {
# Write the $name value to the queue,
# which is the name passed to the function.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
$status = [HttpStatusCode]::OK
$body = "Hello $name"
}
else {
$status = [HttpStatusCode]::BadRequest
$body = "Please pass a name on the query string or in the request body."
}
Update HttpExample\__init__.py to match the following code, adding the msg parameter to the function definition and msg.set(name) under the if name: statement:

import logging

import azure.functions as func


def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        msg.set(name)
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
The msg parameter is an instance of the azure.functions.Out class. The set method writes a string message to the queue. In this case, it's the name passed to the function in the URL query string.
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Now, you can use the new msg parameter to write to the output binding from your function code. Add the
following line of code before the success response to add the value of name to the msg output binding.
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a
queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your run method should now look like the following example:
@FunctionName("HttpExample")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue",
connection = "AzureWebJobsStorage") OutputBinding<String> msg,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
    // Parse the query parameter
    final String query = request.getQueryParameters().get("name");
    final String name = request.getBody().orElse(query);

    if (name == null) {
        return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
            .body("Please pass a name on the query string or in the request body").build();
    } else {
        // Write the name to the message queue.
        msg.setValue(name);
        return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
    }
}

If your project includes the generated unit tests, also update them so the call to run mocks the new msg parameter:

@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);
final HttpResponseMessage ret = new Function().run(req, msg, context);
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With the Core Tools running, go to the Azure: Functions area. Under Functions , expand Local Project
> Functions . Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose
Execute Function Now....
3. When prompted with Enter request body, press Enter to send a request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in the Terminal panel.
5. Press Ctrl + C to stop Core Tools and disconnect the debugger.
2. In the Connect dialog, choose Add an Azure account, choose your Azure environment, and then select Sign in....
After you successfully sign in to your account, you see all of the Azure subscriptions associated with your
account.
Examine the output queue
1. In Visual Studio Code, press F1 to open the command palette, then search for and run the command
Azure Storage: Open in Storage Explorer and choose your storage account name. Your storage account
opens in the Azure Storage Explorer.
2. Expand the Queues node, and then select the queue named outqueue .
The queue contains the message that the queue output binding created when you ran the HTTP-triggered
function. If you invoked the function with the default name value of Azure, the queue message is Name
passed to the function: Azure.
3. Run the function again, send another request, and you see a new message in the queue.
Now, it's time to republish the updated function app to Azure.
Clean up resources
In Azure, resources refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You may be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Over view tab, select the named link next to Resource group .
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group, and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. Now you can learn more about
developing Functions using Visual Studio Code:
Develop Azure Functions using Visual Studio Code
Azure Functions triggers and bindings.
Examples of complete Function projects in C#.
Azure Functions C# developer reference
Examples of complete Function projects in JavaScript.
Azure Functions JavaScript developer guide
Examples of complete Function projects in Java.
Azure Functions Java developer guide
Examples of complete Function projects in TypeScript.
Azure Functions TypeScript developer guide
Examples of complete Function projects in Python.
Azure Functions Python developer guide
Examples of complete Function projects in PowerShell.
Azure Functions PowerShell developer guide
Connect functions to Azure Storage using Visual
Studio
Azure Functions lets you connect Azure services and other resources to functions without having to write your
own integration code. These bindings, which represent both input and output, are declared within the function
definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input
binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more,
see Azure Functions triggers and bindings concepts.
This article shows you how to use Visual Studio to connect the function you created in the previous quickstart
article to Azure Storage. The output binding that you add to this function writes data from the HTTP request to a
message in an Azure Queue storage queue.
Most bindings require a stored connection string that Functions uses to access the bound service. To make it
easier, you use the Storage account that you created with your function app. The connection to this account is
already stored in an app setting named AzureWebJobsStorage .
Prerequisites
Before you start this article, you must:
Complete part 1 of the Visual Studio quickstart.
Sign in to your Azure subscription from Visual Studio.
In the Package Manager Console, run the following command to install the Storage binding extension package:
Install-Package Microsoft.Azure.WebJobs.Extensions.Storage
Now, you can add the storage output binding to your project.
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");

string name = req.Query["name"];

string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
name = name ?? data?.name;

if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Run the function locally
1. To run your function, press F5 in Visual Studio. You might need to enable a firewall exception so that the
tools can handle HTTP requests. Authorization levels are never enforced when you run a function locally.
2. Copy the URL of your function from the Azure Functions runtime output.
3. Paste the URL for the HTTP request into your browser's address bar. Append the query string
?name=<YOUR_NAME> to this URL and run the request. The browser displays the response that the
function returns to the local GET request.
4. Expand the Queues node, and then double-click the queue named outqueue to view the contents of the
queue in Visual Studio.
The queue contains the message that the queue output binding created when you ran the HTTP-triggered
function. If you invoked the function with the default name value of Azure, the queue message is Name
passed to the function: Azure.
5. Run the function again, send another request, and you'll see a new message appear in the queue.
Now, it's time to republish the updated function app to Azure.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. To learn more about developing
Functions, see Develop Azure Functions using Visual Studio.
Next, you should enable Application Insights monitoring for your function app:
Enable Application Insights integration
Connect Azure Functions to Azure Storage using
command line tools
8/2/2022 • 17 minutes to read
In this article, you integrate an Azure Storage queue with the function and storage account you created in the
previous quickstart article. You achieve this integration by using an output binding that writes data from an
HTTP request to a message in the queue. Completing this article incurs no additional costs beyond the few USD
cents of the previous quickstart. To learn more about bindings, see Azure Functions triggers and bindings
concepts.
2. Open the local.settings.json file and locate the value named AzureWebJobsStorage , which is the Storage
account connection string. You use the name AzureWebJobsStorage and the connection string in other
sections of this article.
IMPORTANT
Because the local.settings.json file contains secrets downloaded from Azure, always exclude this file from source control.
The .gitignore file created with a local functions project excludes the file by default.
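For reference, a local.settings.json file with this value follows the general shape sketched below. The account name, key, and worker runtime shown here are placeholders, not values from your project:
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "<LANGUAGE_WORKER>",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<STORAGE_NAME>;AccountKey=<STORAGE_KEY>;EndpointSuffix=core.windows.net"
  }
}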
Now, you can add the storage output binding to your project.
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
Each binding has at least a type, a direction, and a name. In the above example, the first binding is of type
httpTrigger with the direction in . For the in direction, name specifies the name of an input parameter that's
sent to the function when invoked by the trigger.
The second binding in the collection is named res . This http binding is an output binding ( out ) that is used
to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
}
The second binding in the collection is of type http with the direction out , in which case the special name of
$return indicates that this binding uses the function's return value rather than providing an input parameter.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
The second binding in the collection is named Response . This http binding is an output binding ( out ) that is used
to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
}
In this case, msg is given to the function as an output argument. For a queue type, you must also specify the
name of the queue in queueName and provide the name of the Azure Storage connection (from local.settings.json
file) in connection .
In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions
depend on whether your app runs in-process (C# class library) or in an isolated process.
In-process
Isolated process
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
The msg parameter is an ICollector<T> type, representing a collection of messages written to an output
binding when the function completes. In this case, the output is a storage queue named outqueue . The
StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting
that contains the storage account connection string and can be applied at the class, method, or parameter level.
In this case, you could omit StorageAccountAttribute because you're already using the default storage account.
The Run method definition must now look like the following code:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
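Because AzureWebJobsStorage is the default storage connection, the same binding could also be declared without the attribute. As a sketch, the parameter would then reduce to:
[Queue("outqueue")] ICollector<string> msg,
Keeping the explicit StorageAccount attribute, as shown above, makes the intended connection obvious if you later point the binding at a different account.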
In a Java project, the bindings are defined as binding annotations on the function method. The function.json file
is then autogenerated based on these annotations.
Browse to the location of your function code under src/main/java, open the Function.java project file, and add
the following parameter to the run method definition:
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String>
msg
The msg parameter is an OutputBinding<T> type, which represents a collection of strings. These strings are
written as messages to an output binding when the function completes. In this case, the output is a storage
queue named outqueue . The connection string for the Storage account is set by the connection method. You
pass the application setting that contains the Storage account connection string, rather than passing the
connection string itself.
The run method definition must now look like the following example:
@FunctionName("HttpTrigger-Java")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.FUNCTION)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage")
OutputBinding<String> msg, final ExecutionContext context) {
...
}
For more information on the details of bindings, see Azure Functions triggers and bindings concepts and queue
output configuration.
With the queue binding added, update your __init__.py file so that it writes the name value to the msg binding, as in the following example:
import logging

import azure.functions as func


def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        msg.set(name)
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
The msg parameter is an instance of the azure.functions.Out class. The set method writes a string message
to the queue. In this case, it's the name passed to the function in the URL query string.
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this
code before the context.res statement.
context.bindings.msg = name;
With this addition, your function code must now look like the following:
import { AzureFunction, Context, HttpRequest } from "@azure/functions"

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));

    if (name) {
        // Add a message to the storage queue,
        // which is the name passed to the function.
        context.bindings.msg = name;
        // Send a "hello" response.
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};

export default httpTrigger;
Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add
this code before you set the OK status in the if statement.
$outputMsg = $name
Push-OutputBinding -Name msg -Value $outputMsg
With this addition, the if statement in your function code must now look like the following:
if ($name) {
# Write the $name value to the queue,
# which is the name passed to the function.
$outputMsg = $name
Push-OutputBinding -Name msg -Value $outputMsg
$status = [HttpStatusCode]::OK
$body = "Hello $name"
}
else {
$status = [HttpStatusCode]::BadRequest
$body = "Please pass a name on the query string or in the request body."
}
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
With this addition, the end of your Run method must now look like the following code:
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Now, you can use the new msg parameter to write to the output binding from your function code. Add the
following line of code before the success response to add the value of name to the msg output binding.
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a
queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your run method must now look like the following example:
if (name == null) {
return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
.body("Please pass a name on the query string or in the request body").build();
} else {
// Write the name to the message queue.
msg.setValue(name);

return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}
@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);
final HttpResponseMessage ret = new Function().run(req, msg, context);
Observe that you don't need to write any code for authentication, getting a queue reference, or writing data. All
these integration tasks are conveniently handled in the Azure Functions runtime and queue output binding.
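For comparison, here's a minimal sketch of the work the binding replaces, written in C# against the Azure.Storage.Queues client library. The connectionString variable is an assumption standing in for a value read from your app settings, and details such as message encoding may differ from the binding's behavior:
using Azure.Storage.Queues;

// Manual equivalent of the queue output binding (sketch only).
var queueClient = new QueueClient(connectionString, "outqueue");
await queueClient.CreateIfNotExistsAsync(); // create the queue if it doesn't already exist
await queueClient.SendMessageAsync(name);   // write the message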
func start
Toward the end of the output, the following lines must appear:
...
Http Functions:

        HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the
project. In that case, use Ctrl+C to stop the host, go to the project's root folder, and run the previous command
again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a response message that echoes back your query string value. The terminal in
which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl + C and type y to stop the functions host.
TIP
During startup, the host downloads and installs the Storage binding extension and other Microsoft binding extensions.
This installation happens because binding extensions are enabled by default in the host.json file with the following
properties:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[1.*, 2.0.0)"
}
}
If you encounter any errors related to binding extensions, check that the above properties are present in host.json.
bash
PowerShell
Azure CLI
export AZURE_STORAGE_CONNECTION_STRING="<MY_CONNECTION_STRING>"
2. (Optional) Use the az storage queue list command to view the Storage queues in your account. The
output from this command must include a queue named outqueue , which was created when the function
wrote its first message to that queue.
3. Use the az storage message get command to read the message from this queue, which should be the
value you supplied when testing the function earlier. The command reads and removes the first message
from the queue.
bash
PowerShell
Azure CLI
echo `echo $(az storage message get --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode`
Because the message body is stored base64 encoded, the message must be decoded before it's displayed.
After you execute az storage message get , the message is removed from the queue. If there was only one
message in outqueue , you won't retrieve a message when you run this command a second time and
instead get an error.
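If you only want to inspect the message without removing it, the queue service also supports peeking. For example, a command along these lines (a sketch; verify the options against your installed CLI version) returns the first message and leaves it in the queue:
echo `echo $(az storage message peek --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode`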
In the local project folder, use the following Maven command to republish your project:
mvn azure-functions:deploy
Verify in Azure
1. As in the previous quickstart, use a browser or CURL to test the redeployed function.
Browser
curl
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter &name=Functions . The browser should display the same output as
when you ran the function locally.
2. Examine the Storage queue again, as described in the previous section, to verify that it contains the new
message written to the queue.
Clean up resources
After you've finished, use the following command to delete the resource group and all its contained resources to
avoid incurring further costs (substitute the resource group name you used in the previous quickstart):
az group delete --name AzureFunctionsQuickstart-rg
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. Now you can learn more about
developing Functions from the command line using Core Tools and Azure CLI:
Work with Azure Functions Core Tools
Azure Functions triggers and bindings
Examples of complete Function projects in C#.
Azure Functions C# developer reference
Examples of complete Function projects in JavaScript.
Azure Functions JavaScript developer guide
Examples of complete Function projects in TypeScript.
Azure Functions TypeScript developer guide
Examples of complete Function projects in Python.
Azure Functions Python developer guide
Examples of complete Function projects in PowerShell.
Azure Functions PowerShell developer guide
Tutorial: Create a function to integrate with Azure
Logic Apps
8/2/2022 • 8 minutes to read
Azure Functions integrates with Azure Logic Apps in the Logic Apps Designer. This integration allows you to use
the computing power of Functions in orchestrations with other Azure and third-party services.
This tutorial shows you how to create a workflow to analyze Twitter activity. As tweets are evaluated, the
workflow sends notifications when positive sentiments are detected.
In this tutorial, you learn to:
Create a Cognitive Services API Resource.
Create a function that categorizes tweet sentiment.
Create a logic app that connects to Twitter.
Add sentiment detection to the logic app.
Connect the logic app to the function.
Send an email based on the response from the function.
Prerequisites
An active Twitter account.
An Outlook.com account (for sending notifications).
NOTE
If you want to use the Gmail connector, note that only G Suite business accounts can use it without restrictions in logic
apps. If you have a Gmail consumer account, you can use the Gmail connector only with specific Google-approved apps
and services, or you can create a Google client app to use for authentication in your Gmail connector.
For more information, see Data security and privacy policies for Google connectors in Azure Logic Apps.
SETTING | VALUE | REMARKS
Resource group | Create a new resource group named tweet-sentiment-tutorial | Later, you delete this resource group to remove all the resources created during this tutorial.
Name | TweetSentimentApp |
Publish | Code |
using System;
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
A sentiment score is passed into the function, which returns a category name for the value.
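For reference, a minimal categorization along these lines, assuming illustrative cutoff scores of 0.3 and 0.6 for negative and neutral sentiment (the exact values in the sample may differ), looks like this:
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    // The request body is expected to carry a raw sentiment score, such as 0.9.
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    double score = Convert.ToDouble(requestBody);

    // Map the score to a category name; the cutoffs here are assumptions.
    string category = "Positive";
    if (score < 0.3)
    {
        category = "Negative";
    }
    else if (score < 0.6)
    {
        category = "Neutral";
    }

    return new OkObjectResult(category);
}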
6. Select the Save button on the toolbar to save your changes.
NOTE
To test the function, select Test/Run from the top menu. On the Input tab, enter a value of 0.9 in the Body
input box, and then select Run . Verify that a value of Positive is returned in the HTTP response content box in the
Output section.
Next, create a logic app that integrates with Azure Functions, Twitter, and the Cognitive Services API.
Connect to Twitter
Create a connection to Twitter so your app can poll for new tweets.
1. Search for Twitter in the top search box.
2. Select the Twitter icon.
3. Select the When a new tweet is posted trigger.
4. Enter the following values to set up the connection.
SETTING | VALUE
5. Select Sign in .
6. Follow the prompts in the pop-up window to complete signing in to Twitter.
7. Next, enter the following values in the When a new tweet is posted box.
SETTING | VALUE
How often do you want to check for items? | Enter 1 in the textbox and select Hour in the dropdown. You may enter different values, but be sure to review the current limitations of the Twitter connector.
SETTING | VALUE
Account Key | Paste in the Text Analytics account key you set aside earlier.
Site URL | Paste in the Text Analytics endpoint you set aside earlier.
5. Select Create .
6. Click inside the Add new parameter box, and check the box next to documents that appears in the pop-up.
7. Click inside the documents Id - 1 textbox to open the dynamic content pop-up.
8. In the dynamic content search box, search for id , and click on Tweet id .
9. Click inside the documents Text - 1 textbox to open the dynamic content pop-up.
10. In the dynamic content search box, search for text , and click on Tweet text .
11. In Choose an action , type Text Analytics , and then click the Detect sentiment action.
12. Select the Save button on the toolbar to save your progress.
The Detect Sentiment box should look like the following screenshot.
Connect sentiment output to function endpoint
1. Select New step .
2. Search for Azure Functions in the search box.
3. Select the Azure Functions icon.
4. Search for your function name in the search box. If you followed the guidance above, your function name
begins with TweetSentimentAPI .
5. Select the function icon.
6. Select the TweetSentimentFunction item.
7. Click inside the Request Body box, and select the Detect Sentiment score item from the pop-up window.
8. Select the Save button on the toolbar to save your progress.
Clean up resources
To clean up all the Azure services and accounts created during this tutorial, delete the resource group.
1. Search for Resource groups in the top search box.
2. Select the tweet-sentiment-tutorial resource group.
3. Select Delete resource group .
4. Enter tweet-sentiment-tutorial in the text box.
5. Select the Delete button.
Optionally, you may want to return to your Twitter account and delete any test tweets from your feed.
Next steps
Create a serverless API using Azure Functions
Quickstart: Create a function in Azure with Python
using Visual Studio Code
8/2/2022 • 7 minutes to read
In this article, you use Visual Studio Code to create a Python function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a CLI-based version of this article.
2. Choose the directory location for your project workspace and choose Select . You should either create a
new folder or choose an empty folder for the project workspace. Don't choose a project folder that is
already part of a workspace.
3. Provide the following information at the prompts:
PROMPT | SELECTION
Select a Python interpreter to create a virtual environment | Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.
Select a template for your project's first function | Choose HTTP trigger .
Select how you would like to open your project | Choose Add to workspace .
4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP
trigger. You can view the local project files in the Explorer. For more information about the files that are
created, see Generated project files.
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With Core Tools still running in Terminal , choose the Azure icon in the activity bar. In the Workspace
area, expand Local Project > Functions . Right-click (Windows) or Ctrl - click (macOS) the
HttpExample function and choose Execute Function Now... .
3. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in Terminal panel.
5. With the Terminal panel focused, press Ctrl + C to stop Core Tools and disconnect the debugger.
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code
to publish the project directly to Azure.
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet
have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure
for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
PROMPT | SELECTION
Select subscription | Choose the subscription to use. You won't see this prompt when you have only one subscription visible under Resources .
Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
Select a runtime stack | Choose the language version on which you've been running locally.
Select a location for new resources | For better performance, choose a region near you.
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Run the function in Azure
1. Back in the Resources area in the side bar, expand your subscription, your new function app, and
Functions . Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose
Execute Function Now....
2. In Enter request body you see the request message body value of { "name": "Azure" } . Press Enter to
send this request message to your function.
3. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group .
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group , and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
For more information about Functions costs, see Estimating Consumption plan costs.
Next steps
You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next
article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure
services, see Add bindings to an existing function in Azure Functions.
Connect to an Azure Storage queue
Having issues? Let us know.
Create serverless APIs in Visual Studio using Azure
Functions and API Management integration
(preview)
8/2/2022 • 9 minutes to read
REST APIs are often described using an OpenAPI definition. This file contains information about operations in an
API and how the request and response data for the API should be structured.
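As a rough illustration (not the exact file this tutorial generates), an OpenAPI definition describing a single POST operation can be as small as this:
{
  "openapi": "3.0.1",
  "info": { "title": "TurbineRepair API", "version": "1.0" },
  "paths": {
    "/api/TurbineRepair": {
      "post": {
        "summary": "Determines whether an emergency turbine repair is cost-effective.",
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}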
In this tutorial, you learn how to:
Create a serverless function project in Visual Studio
Test function APIs locally using built-in OpenAPI functionality
Publish project to a function app in Azure with API Management integration
Get the access key for the function and set it in API Management
Download the OpenAPI definition file
The serverless function you create provides an API that lets you determine whether an emergency repair on a
wind turbine is cost-effective. Because both the function app and API Management instance you create use
consumption plans, your cost for completing this tutorial is minimal.
NOTE
The OpenAPI and API Management integration featured in this article is currently in preview. This method for exposing a
serverless API is only supported for in-process C# class library functions. Isolated process C# class library functions and all
other language runtimes should instead use Azure API Management integration from the portal.
Prerequisites
Visual Studio 2022. Make sure you select the Azure development workload during installation.
An active Azure subscription. Create a free account before you begin.
Function template | HTTP trigger with OpenAPI | This value creates a function triggered by an HTTP request, with the ability to generate an OpenAPI definition file.
Use Azurite for runtime storage account (AzureWebJobsStorage) | Selected | You can use the emulator for local development of HTTP trigger functions. Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure.
5. Select Create to create the function project and HTTP trigger function, with support for OpenAPI.
Visual Studio creates a project and class named Function1 that contains boilerplate code for the HTTP trigger
function type. Next, you replace this function template code with your own customized code.
The function then calculates how much a repair costs, and how much revenue the turbine could make in a 24-
hour period. Parameters are supplied either in the query string or in the payload of a POST request.
In the Function1.cs project file, replace the contents of the generated class library code with the following code:
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Attributes;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Enums;
using Microsoft.Extensions.Logging;
using Microsoft.OpenApi.Models;
using Newtonsoft.Json;
namespace TurbineRepair
{
public static class Turbines
{
const double revenuePerkW = 0.12;
const double technicianCost = 250;
const double turbineCost = 100;
[FunctionName("TurbineRepair")]
[OpenApiOperation(operationId: "Run")]
[OpenApiSecurity("function_key", SecuritySchemeType.ApiKey, Name = "code", In =
OpenApiSecurityLocationType.Query)]
[OpenApiRequestBody("application/json", typeof(RequestBodyModel),
Description = "JSON request body containing { hours, capacity}")]
[OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "application/json", bodyType:
typeof(string),
Description = "The OK response message containing a JSON result.")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
ILogger log)
{
// Get request body data.
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
int? capacity = data?.capacity;
int? hours = data?.hours;

// Return a 400 response if either value is missing from the request body.
if (capacity == null || hours == null)
{
return new BadRequestObjectResult("Please pass capacity and hours in the request body");
}

// Formulas to calculate revenue and cost.
double? revenueOpportunity = capacity * revenuePerkW * 24;
double? costToFix = (hours * technicianCost) + turbineCost;
string repairTurbine = revenueOpportunity > costToFix ? "Yes" : "No";

return new OkObjectResult(new
{
message = repairTurbine,
revenueOpportunity = "$" + revenueOpportunity,
costToFix = "$" + costToFix
});
}
}

// Class that defines the JSON request body: { "hours": ..., "capacity": ... }.
public class RequestBodyModel
{
public int Hours { get; set; }
public int Capacity { get; set; }
}
}
This function code returns a message of Yes or No to indicate whether an emergency repair is cost-effective. It
also returns the revenue opportunity that the turbine represents and the cost to fix the turbine.
3. Select POST > Try it out , enter values for hours and capacity either as query parameters or in the
JSON request body, and select Execute .
4. When you enter integer values like 6 for hours and 2500 for capacity , you get a JSON response that
looks like the following example:
{"message":"Yes","revenueOpportunity":"$7200","costToFix":"$1600"}
Now you have a function that determines the cost-effectiveness of emergency repairs. Next, you publish your
project and API definitions to Azure.
4. Create a new instance using the values specified in the following table:
Resource group | Name of your resource group | The resource group in which to create your function app. Select an existing resource group from the drop-down list or choose New to create a new resource group.
5. Select Create to create a function app and its related resources in Azure. Status of resource creation is
shown in the lower left of the window.
6. Back in Functions instance , make sure that Run from package file is checked. Your function app is
deployed using Zip Deploy with Run-From-Package mode enabled. This deployment method is
recommended for your functions project, since it results in better performance.
7. Select Next , and in API Management page, also choose + Create an API Management API .
8. Create an API in API Management by using values in the following table:
SETTING | VALUE | DESCRIPTION
Resource group | Name of your resource group | Select the same resource group as your function app from the drop-down list.
API Management service | New instance | Select New to create a new API Management instance in the serverless tier.
9. Select Create to create the API Management instance with the TurbineRepair API from the function
integration.
10. Select Finish , verify the Publish page says Ready to publish , and then select Publish to deploy the
package containing your project files to your new function app in Azure.
After the deployment completes, the root URL of the function app in Azure is shown in the Publish tab.
Get the function access key
1. In the Publish tab, select the ellipses (...) next to Hosting and select Open in Azure portal . The
function app you created is opened in the Azure portal in your default browser.
2. Under Functions , select Functions > TurbineRepair then select Function keys .
3. Under Function keys , select default and copy the value . You can now set this key in API Management
so that it can access the function endpoint.
3. Below Inbound processing , in Set query parameters , type code for Name , select +Value , paste in
the copied function key, and select Save . API Management includes the function key when it passes calls
through to the function endpoint.
Now that the function key is set, you can call the API to verify that it works when hosted in Azure.
{
"hours": "6",
"capacity": "2500"
}
As before, you can also provide the same values as query parameters.
2. Select Send , and then view the HTTP response to verify the same results are returned from the API.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, you can delete them by deleting the resource group.
From the Azure portal menu or Home page, select Resource groups . Then, on the Resource groups page,
select the group you created.
On the myResourceGroup page, make sure that the listed resources are the ones you want to delete.
Select Delete resource group , type the name of your group in the text box to confirm, and then select Delete .
Next steps
You've used Visual Studio 2022 to create a function that is self-documenting because of the OpenAPI Extension
and integrated with API Management. You can now refine the definition in API Management in the portal. You
can also learn more about API Management.
Edit the OpenAPI definition in API Management
Tutorial: Integrate Azure Functions with an Azure
virtual network by using private endpoints
8/2/2022 • 16 minutes to read
This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using
private endpoints. You'll create a function by using a storage account that's locked behind a virtual network. The
function uses a Service Bus queue trigger.
In this tutorial, you'll:
Create a function app in the Premium plan.
Create Azure resources, such as the Service Bus, storage account, and virtual network.
Lock down your storage account behind a private endpoint.
Lock down your Service Bus behind a private endpoint.
Deploy a function app that uses both the Service Bus and HTTP triggers.
Lock down your function app behind a private endpoint.
Test to see that your function app is secure inside the virtual network.
Clean up resources.
Function App name | Globally unique name | Name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9 , and - .
4. Select Next: Hosting . On the Hosting page, enter the following settings.
5. Select Next: Monitoring . On the Monitoring page, enter the following settings.
4. On the IP Addresses tab, select Add subnet . Use the following table to configure the subnet settings.
SETTING | SUGGESTED VALUE | DESCRIPTION
4. On the Resource tab, use the private endpoint settings shown in the following table.
8. Create another private endpoint for tables. On the Resources tab, use the settings shown in the
following table. For all other settings, use the same values you used to create the private endpoint for
files.
9. After the private endpoints are created, return to the Firewalls and vir tual networks section of your
storage account.
10. Ensure Selected networks is selected. It's not necessary to add an existing virtual network.
Resources in the virtual network can now communicate with the storage account using the private endpoint.
4. On the Resource tab, use the private endpoint settings shown in the following table.
11. Select Add your client IP address to give your current client IP access to the namespace.
NOTE
Allowing your client IP address is necessary to enable the Azure portal to publish messages to the queue later in
this tutorial.
Create a queue
Create the queue where your Azure Functions Service Bus trigger will get events:
1. In your Service Bus, in the menu on the left, select Queues .
2. Select Queue . For the purposes of this tutorial, provide the name queue as the name of the new queue.
3. Select Create .
1. In GitHub, go to the following sample repository. It contains a function app and two functions, an HTTP
trigger, and a Service Bus queue trigger.
https://github.com/Azure-Samples/functions-vnet-tutorial
2. At the top of the page, select Fork to create a fork of this repository in your own GitHub account or
organization.
3. In your function app, in the menu on the left, select Deployment Center . Then select Settings .
4. On the Settings tab, use the deployment settings shown in the following table.
5. Select Save .
6. Your initial deployment might take a few minutes. When your app is successfully deployed, on the Logs
tab, you see a Success (Active) status message. If necessary, refresh the page.
Congratulations! You've successfully deployed your sample function app.
7. On the Live metrics tab, you should see that your Service Bus queue trigger has fired. If it hasn't, resend
the message from Service Bus Explorer .
Congratulations! You've successfully tested your function app setup with private endpoints.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, you can delete them by deleting the resource group.
From the Azure portal menu or Home page, select Resource groups . Then, on the Resource groups page,
select myResourceGroup .
On the myResourceGroup page, make sure that the listed resources are the ones you want to delete.
Select Delete resource group , type myResourceGroup in the text box to confirm, and then select Delete .
Next steps
In this tutorial, you created a Premium function app, storage account, and Service Bus. You secured all of these
resources behind private endpoints.
Use the following links to learn more Azure Functions networking options and private endpoints:
Networking options in Azure Functions
Azure Functions Premium plan
Service Bus private endpoints
Azure Storage private endpoints
Tutorial: Establish Azure Functions private site
access
8/2/2022 • 9 minutes to read
This tutorial shows you how to enable private site access with Azure Functions. By using private site access, you
can require that your function code is only triggered from a specific virtual network.
Private site access is useful in scenarios when access to the function app needs to be limited to a specific virtual
network. For example, the function app may need to be reachable only by employees of a specific organization, or by
services within the specified virtual network (such as another Azure Function, an Azure Virtual Machine,
or an AKS cluster).
If a Functions app needs to access Azure resources within the virtual network, or connected via service
endpoints, then virtual network integration is needed.
In this tutorial, you learn how to configure private site access for your function app:
Create a virtual machine
Create an Azure Bastion service
Create an Azure Functions app
Configure a virtual network service endpoint
Create and deploy an Azure Function
Invoke the function from outside and within the virtual network
If you don’t have an Azure subscription, create a free account before you begin.
Topology
The following diagram shows the architecture of the solution to be created:
Prerequisites
For this tutorial, it's important that you understand IP addressing and subnetting. You can start with this article
that covers the basics of addressing and subnetting. Many more articles and videos are available online.
5. Choose the Networking tab and select Create new to configure a new virtual network.
6. In Create virtual network, use the settings in the table below the image:
SETTING | SUGGESTED VALUE | DESCRIPTION
Address range (subnet) | 10.10.1.0/24 | The subnet size defines how many interfaces can be added to the subnet. This subnet is used by the VM. A /24 subnet provides 254 host addresses.
NOTE
For a detailed, step-by-step guide to creating an Azure Bastion resource, refer to the Create an Azure Bastion host
tutorial.
5. Create a subnet in which Azure can provision the Azure Bastion host. Choosing Manage subnet
configuration opens a new pane where you can define a new subnet. Choose + Subnet to create a new
subnet.
6. The subnet must be of the name AzureBastionSubnet and the subnet prefix must be at least /27 . Select
OK to create the subnet.
7. On the Create a Bastion page, select the newly created AzureBastionSubnet from the list of available
subnets.
8. Select Review & Create . Once validation completes, select Create . It will take a few minutes for the
Azure Bastion resource to be created.
Function App name | Globally unique name | Name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
NOTE
It may take several minutes to enable the service endpoint.
7. The Access Restrictions page now shows that there is a new restriction. It may take a few seconds for the
Endpoint status to change from Disabled through Provisioning to Enabled.
IMPORTANT
Each function app has an Advanced Tool (Kudu) site that is used to manage function app deployments. This site is
accessed from a URL like: <FUNCTION_APP_NAME>.scm.azurewebsites.net . Enabling access restrictions on the
Kudu site prevents the deployment of the project code from a local developer workstation, and then an agent is
needed within the virtual network to perform the deployment.
If you try to access the function app now from your computer outside of your virtual network, you'll
receive an HTTP 403 page indicating that access is forbidden.
2. Return to the resource group and select the previously created virtual machine. In order to access the site
from the VM, you need to connect to the VM via the Azure Bastion service.
3. Select Connect and then choose Bastion .
4. Provide the required username and password to log into the virtual machine.
5. Select Connect . A new browser window will pop up to allow you to interact with the virtual machine. It's
possible to access the site from the web browser on the VM because the VM is accessing the site through
the virtual network. While the site is only accessible from within the designated virtual network, a public
DNS entry remains.
Create a function
The next step in this tutorial is to create an HTTP-triggered Azure Function. Invoking the function via an HTTP
GET or POST should result in a response of "Hello, {name}".
1. Follow one of the following quickstarts to create and deploy your Azure Functions app.
Visual Studio Code
Visual Studio
Command line
Maven (Java)
2. When publishing your Azure Functions project, choose the function app resource that you created earlier
in this tutorial.
3. Verify the function is deployed.
2. Paste the URL into a web browser. When you now try to access the function app from a computer outside
of your virtual network, you receive an HTTP 403 response indicating access to the app is forbidden.
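You can verify this from any machine outside the virtual network. For example, using curl with placeholder app and function names:
curl -I https://<FUNCTION_APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>
The first line of the output should report a 403 status, confirming that the access restriction is in effect.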
Next steps
Learn more about the networking options in Functions
Tutorial: Control Azure Functions outbound IP with
an Azure virtual network NAT gateway
8/2/2022 • 9 minutes to read
Virtual network address translation (NAT) simplifies outbound-only internet connectivity for virtual networks.
When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. A NAT
can be useful for Azure Functions or Web Apps that need to consume a third-party service that uses an allowlist
of IP addresses as a security measure. To learn more, see What is Virtual Network NAT?.
This tutorial shows you how to use virtual network NATs to route outbound traffic from an HTTP triggered
function. This function lets you check its own outbound IP address. During this tutorial, you'll:
Create a virtual network
Create a Premium plan function app
Create a public IP address
Create a NAT gateway
Configure function app to route outbound traffic through the NAT gateway
Topology
The following diagram shows the architecture of the solution that you create:
Functions running in the Premium plan have the same hosting capabilities as web apps in Azure App Service,
which includes the VNet Integration feature. To learn more about VNet Integration, including troubleshooting
and advanced configuration, see Integrate your app with an Azure virtual network.
Prerequisites
For this tutorial, it's important that you understand IP addressing and subnetting. You can start with this article
that covers the basics of addressing and subnetting. Many more articles and videos are available online.
If you don’t have an Azure subscription, create a free account before you begin.
If you've already completed the integrate Functions with an Azure virtual network tutorial, you can skip to
Create an HTTP trigger function.
Create a virtual network
1. From the Azure portal menu, select Create a resource . From the Azure Marketplace, select
Networking > Virtual network .
2. In Create vir tual network , enter or select the settings specified as shown in the following table:
SETTING | VALUE
3. Select Next: IP Addresses , and for IPv4 address space , enter 10.10.0.0/16.
4. Select Add subnet , then enter Tutorial-Net for Subnet name and 10.10.1.0/24 for Subnet address
range .
5. Select Add , then select Review + create . Leave the rest as default and select Create .
6. In Create virtual network , select Create .
Next, you create a function app in the Premium plan. This plan provides serverless scale while supporting virtual
network integration.
NOTE
For the best experience in this tutorial, choose .NET for runtime stack and choose Windows for operating system. Also,
create your function app in the same region as your virtual network.
1. From the Azure portal menu or the Home page, select Create a resource .
2. In the New page, select Compute > Function App .
3. On the Basics page, use the function app settings as specified in the following table:
Function App name | Globally unique name | Name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9 , and - .
4. Select Next: Hosting . On the Hosting page, enter the following settings:
5. Select Next: Monitoring . On the Monitoring page, enter the following settings:
Virtual Network | MyResourceGroup-vnet | This virtual network is the one you created earlier.
Virtual network address block | 10.10.0.0/16 | You should only have one address block defined.
Subnet Address Block | 10.10.2.0/24 | The subnet size restricts the total number of instances that your Premium plan function app can scale out to. This example uses a /24 subnet with 254 available host addresses. This subnet is over-provisioned, but easy to calculate.
4. Select OK to add the subnet. Close the VNet Integration and Network Feature Status pages to return
to your function app page.
The function app can now access the virtual network. Next, you'll add an HTTP-triggered function to the function
app.
#r "Newtonsoft.Json"
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
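The body of this function can be sketched as follows, assuming an external echo service such as ifconfig.me (the actual sample may use a different service):
// Reuse a single HttpClient across invocations (see the Manage connections guidance).
private static HttpClient client = new HttpClient();

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    // Ask an external echo service which IP address this call arrived from;
    // that address is this function app's current outbound IP.
    var response = await client.GetAsync(@"https://ifconfig.me");
    return new OkObjectResult(await response.Content.ReadAsStringAsync());
}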
This code calls an external website that returns the IP address of the caller, which in this case is this
function. This method lets you easily determine the outbound IP address being used by your function
app.
Now you're ready to run the function and check the current outbound IPs.
4. Verify that IP address in the HTTP response body is one of the values from the outbound IP addresses you
viewed earlier.
Now, you can create a public IP and use a NAT gateway to modify this outbound IP address.
Create public IP
1. From your resource group, select Add , search the Azure Marketplace for Public IP address , and select
Create . Use the settings in the table below the image:
SETTING | SUGGESTED VALUE
IP Version | IPv4
SKU | Standard
Tier | Regional
Name | Outbound-IP
2. Select Next: Outbound IP . In the Public IP addresses field, select the previously created public IP
address. Leave Public IP Prefixes unselected.
3. Select Next: Subnet . Select the myResourceGroup-vnet resource in the Virtual network field and
Function-Net subnet.
4. Select Review + Create then Create to submit the deployment.
Once the deployment completes, the NAT gateway is ready to route traffic from your function app subnet to the
Internet.
FIELD NAME | VALUE
Name | WEBSITE_VNET_ROUTE_ALL
Value | 1
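If you prefer the command line to the portal for this step, the same app setting can be applied with the Azure CLI; the app and resource group names below are placeholders:
az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP> --settings WEBSITE_VNET_ROUTE_ALL=1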
Clean up resources
You created resources to complete this tutorial. You'll be billed for these resources, depending on your account
status and service pricing. To avoid incurring extra costs, delete the resources when you no longer need them.
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
Azure Functions networking options
Tutorial: Create a function app that connects to
Azure services using identities instead of secrets
8/2/2022 • 13 minutes to read
This tutorial shows you how to configure a function app using Azure Active Directory identities instead of
secrets or connection strings, where possible. Using identities helps you avoid accidentally leaking sensitive
secrets and can provide better visibility into how data is accessed. To learn more about identity-based
connections, see configure an identity-based connection.
While the procedures shown work generally for all languages, this tutorial currently supports C# class library
functions on Windows specifically.
In this tutorial, you learn how to:
Create a function app in Azure using an ARM template
Enable both system-assigned and user-assigned managed identities on the function app
Create role assignments that give permissions to other resources
Move secrets that can't be replaced with identities into Azure Key Vault
Configure an app to connect to the default host storage using its managed identity
After you complete this tutorial, you should complete the follow-on tutorial that shows how to use identity-
based connections instead of secrets with triggers and bindings.
Prerequisites
An Azure account with an active subscription. Create an account for free.
The .NET Core 3.1 SDK
The Azure Functions Core Tools version 3.x.
Create a function app that uses Key Vault for necessary secrets
Azure Files is an example of a service that does not yet support Azure Active Directory authentication for SMB
file shares. Azure Files is the default file system for Windows deployments on Premium and Consumption plans.
While we could remove Azure Files entirely, this introduces limitations you may not want. Instead, you will move
the Azure Files connection string into Azure Key Vault. That way it is centrally managed, with access controlled
by the identity.
Create an Azure Key Vault
First you will need a key vault to store secrets in. You will configure it to use Azure role-based access control
(RBAC) for determining who can read secrets from the vault.
1. In the Azure portal, choose Create a resource (+) .
2. On the Create a resource page, select Security > Key Vault .
3. On the Basics page, use the following table to configure the key vault.
SETTING           SUGGESTED VALUE        DESCRIPTION
Key vault name    Globally unique name   Name that identifies your new key vault. The vault name must only contain alphanumeric characters and dashes and cannot start with a number.

Name              Globally unique name   Name that identifies your new user-assigned identity.
4. Select Review + create . Review the configuration, and then click Create .
5. When the identity is created, navigate to it in the portal. Select Properties , and make note of the
Resource ID , as you will need it later.
6. Select Azure Role Assignments , and click Add role assignment (Preview) .
7. In the Add role assignment (Preview) page, use options as shown in the table below.
SETTING    SUGGESTED VALUE   DESCRIPTION
Resource   Your key vault    The key vault you created earlier.
8. Select Save . It might take a minute or two for the role to show up when you refresh the role assignments
list for the identity.
The identity will now be able to read secrets stored in the key vault. Later in the tutorial, you will add additional
role assignments for different purposes.
Generate a template for creating a function app
The portal experience for creating a function app does not interact with Azure Key Vault, so you will need to
generate and edit an Azure Resource Manager template. You can then use this template to create your function
app referencing the Azure Files connection string from your key vault.
IMPORTANT
Don't create the function app until after you edit the ARM template. The Azure Files configuration needs to be set up at
app creation time.
SETTING            SUGGESTED VALUE        DESCRIPTION
Function App name  Globally unique name   Name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9 , and - .
4. Select Review + create . Your app uses the default values on the Hosting and Monitoring page. You're
welcome to review the default options, and they'll be included in the ARM template that you generate.
5. Instead of creating your function app here, choose Download a template for automation , which is to
the right of the Next button.
6. In the template page, select Deploy , then in the Custom deployment page, select Edit template .
NOTE
If you were to create a full template for automation, you would want to include definitions for the identity and role
assignment resources, with the appropriate dependsOn clauses. This would replace the earlier steps which used the
portal. Consult the Azure Resource Manager guidance and the documentation for each service.
1. In the editor, find where the resources array begins. Before the function app definition, add the following
section which puts the Azure Files connection string into Key Vault. Substitute "VAULT_NAME" with the
name of your key vault.
{
"type": "Microsoft.KeyVault/vaults/secrets",
"apiVersion": "2016-10-01",
"name": "VAULT_NAME/azurefilesconnectionstring",
"properties": {
"value": "
[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageAccountName'),';AccountKey='
,listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-
06-01').keys[0].value,';EndpointSuffix=','core.windows.net')]"
},
"dependsOn": [
"[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]"
]
},
2. In the definition for the function app resource (which has type set to Microsoft.Web/sites ), add
Microsoft.KeyVault/vaults/VAULT_NAME/secrets/azurefilesconnectionstring to the dependsOn array. Again
substitute "VAULT_NAME" with the name of your key vault. This makes it so your app will not be created
before that secret is defined. The dependsOn array should look like the following example.
{
"type": "Microsoft.Web/sites",
"apiVersion": "2018-11-01",
"name": "[parameters('name')]",
"location": "[parameters('location')]",
"tags": null,
"dependsOn": [
"microsoft.insights/components/idcxntut",
"Microsoft.KeyVault/vaults/VAULT_NAME/secrets/azurefilesconnectionstring",
"[concat('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
"[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]"
],
// ...
}
3. Add the identity block from the following example into the definition for your function app resource.
Substitute "IDENTITY_RESOURCE_ID" for the resource ID of your user-assigned identity.
{
"apiVersion": "2018-11-01",
"name": "[parameters('name')]",
"type": "Microsoft.Web/sites",
"kind": "functionapp",
"location": "[parameters('location')]",
"identity": {
"type": "SystemAssigned,UserAssigned",
"userAssignedIdentities": {
"IDENTITY_RESOURCE_ID": {}
}
},
"tags": null,
// ...
}
This identity block also sets up a system-assigned identity which you will use later in this tutorial.
4. Add the keyVaultReferenceIdentity property to the properties object for the function app as in the
below example. Substitute "IDENTITY_RESOURCE_ID" for the resource ID of your user-assigned identity.
{
// ...
"properties": {
"name": "[parameters('name')]",
"keyVaultReferenceIdentity": "IDENTITY_RESOURCE_ID",
// ...
}
}
You need this configuration because an app could have multiple user-assigned identities configured.
Whenever you want to use a user-assigned identity, you have to specify which one by its resource ID. That
isn't true of system-assigned identities, since an app will only ever have one. Many features that use
managed identity assume they should use the system-assigned one by default.
5. Now find the JSON object that defines the WEBSITE_CONTENTAZUREFILECONNECTIONSTRING application setting,
which should look like the following example:
{
"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
"value": "
[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageAccountName'),';AccountKey='
,listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2019-
06-01').keys[0].value,';EndpointSuffix=','core.windows.net')]"
},
6. Replace the value field with a reference to the secret as shown in the following example. Substitute
"VAULT_NAME" with the name of your key vault.
{
"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
"value": "[concat('@Microsoft.KeyVault(SecretUri=',
reference(resourceId('Microsoft.KeyVault/vaults/secrets', 'VAULT_NAME',
'azurefilesconnectionstring')).secretUri, ')')]"
},
TIP
The Application Insights connection string and its included instrumentation key are not considered secrets and can be
retrieved from App Insights using Reader permissions. You do not need to move them into Key Vault, although you
certainly can.
IMPORTANT
The AzureWebJobsStorage configuration is used by some triggers and bindings, and those extensions must be able to
use identity-based connections, too. Apps that use blob triggers or event hub triggers may need to update those
extensions. Because no functions have been defined for this app, there isn't a concern yet. To learn more about this
requirement, see Connecting to host storage with an identity (Preview).
Similarly, AzureWebJobsStorage is used for deployment artifacts when using server-side build in Linux Consumption.
When you enable identity-based connections for AzureWebJobsStorage in Linux Consumption, you will need to deploy
via an external deployment package.
This configuration will let the system know that it should use an identity to connect to the resource.
4. Select OK and then Save > Continue to save your changes.
You've removed the storage connection string requirement for AzureWebJobsStorage by configuring your app
to instead connect to blobs using managed identities.
NOTE
The __accountName syntax is unique to the AzureWebJobsStorage connection and cannot be used for other storage
connections. To learn to define other connections, check the reference for each trigger and binding your app uses.
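For illustration, the identity-based host storage connection reduces to a single application setting of this shape (the account name shown is a placeholder):

AzureWebJobsStorage__accountName = <STORAGE_ACCOUNT_NAME>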
Next steps
This tutorial showed how to create a function app without storing secrets in its configuration.
In the next tutorial, you'll learn how to use identity in trigger and binding connections.
Use identity-based connections instead of secrets with triggers and bindings
Tutorial: Use identity-based connections instead of
secrets with triggers and bindings
8/2/2022 • 7 minutes to read
This tutorial shows you how to configure Azure Functions to connect to Azure Service Bus queues using
managed identities instead of secrets stored in the function app settings. The tutorial is a continuation of the
Create a function app without default storage secrets in its definition tutorial. To learn more about identity-based
connections, see Configure an identity-based connection.
While the procedures shown work generally for all languages, this tutorial currently supports C# class library
functions on Windows specifically.
In this tutorial, you'll learn how to:
Create a Service Bus namespace and queue.
Configure your function app with managed identity
Create a role assignment granting that identity permission to read from the Service Bus queue
Create and deploy a function app with a Service Bus trigger.
Verify your identity-based connection to Service Bus
Prerequisite
Complete the previous tutorial: Create a function app with identity-based connections.
NOTE
Role requirements for using identity-based connections vary depending on the service and how you are connecting to it.
Needs vary across triggers, input bindings, and output bindings. For more details on specific role requirements, please
refer to the trigger and binding documentation for the service.
1. In your service bus namespace that you just created, select Access Control (IAM) . This is where you can
view and configure who has access to the resource.
2. Click Add and select add role assignment .
3. Search for Azure Service Bus Data Receiver , select it, and click Next .
4. On the Members tab, under Assign access to , choose Managed Identity
5. Click Select members to open the Select managed identities panel.
6. Confirm that the Subscription is the one in which you created the resources earlier.
7. In the Managed identity selector, choose Function App from the System-assigned managed
identity category. The label "Function App" may have a number in parentheses next to it, indicating the
number of apps in the subscription with system-assigned identities.
8. Your app should appear in a list below the input fields. If you don't see it, you can use the Select box to
filter the results with your app's name.
9. Click on your application. It should move down into the Selected members section. Click Select .
10. Back on the Add role assignment screen, click Review + assign . Review the configuration, and then
click Review + assign .
You've granted your function app access to the service bus namespace using managed identities.
4. After you create the two settings, select Save > Confirm .
NOTE
When using Azure App Configuration or Key Vault to provide settings for Managed Identity connections, setting names
should use a valid key separator such as : or / in place of the __ to ensure names are resolved correctly.
For example, ServiceBusConnection:fullyQualifiedNamespace .
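For reference, the app setting that defines this connection has the following shape (the namespace name is a placeholder):

ServiceBusConnection__fullyQualifiedNamespace = <SERVICE_BUS_NAMESPACE>.servicebus.windows.net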
Now that you've prepared the function app to connect to the service bus namespace using a managed identity,
you can add a new function that uses a Service Bus trigger to your local project.
cd LocalFunctionProj
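A sketch of the package update, assuming the in-process model (the 5.x versions of this extension support identity-based connections; the exact version to pin is an assumption):

dotnet add package Microsoft.Azure.WebJobs.Extensions.ServiceBus --version 5.2.0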
This replaces the default version of the Service Bus extension package with a version that supports
managed identities.
4. Run the following command to add a Service Bus triggered function to the project:
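A sketch of that command, assuming the default template names:

func new --name ServiceBusTrigger --template "ServiceBusQueueTrigger"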
This adds the code for a new Service Bus trigger and a reference to the extension package. You need to
add a service bus namespace connection setting for this trigger.
5. Open the new ServiceBusTrigger.cs project file and replace the ServiceBusTrigger class with the following
code:
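A minimal sketch of that class, assuming the in-process model and the LocalFunctionProj project created earlier:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace LocalFunctionProj
{
    public static class ServiceBusTrigger
    {
        [FunctionName("ServiceBusTrigger")]
        public static void Run(
            // "ServiceBusConnection" resolves to the identity-based
            // ServiceBusConnection__fullyQualifiedNamespace app setting.
            [ServiceBusTrigger("myinputqueue", Connection = "ServiceBusConnection")] string myQueueItem,
            ILogger log)
        {
            log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
        }
    }
}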
This code sample updates the queue name to myinputqueue , which is the same name as the queue you
created earlier. It also sets the name of the Service Bus connection to ServiceBusConnection . This is the
Service Bus namespace used by the identity-based connection
ServiceBusConnection__fullyQualifiedNamespace that you configured in the portal.
NOTE
If you try to run your functions now using func start you'll receive an error. This is because you don't have an identity-
based connection defined locally. If you want to run your function locally, set the app setting
ServiceBusConnection__fullyQualifiedNamespace in local.settings.json as you did in the previous section. In
addition, you'll need to assign the role to your developer identity. For more details, please refer to the local development
with identity-based connections documentation.
NOTE
When using Azure App Configuration or Key Vault to provide settings for Managed Identity connections, setting names
should use a valid key separator such as : or / in place of the __ to ensure names are resolved correctly.
For example, ServiceBusConnection:fullyQualifiedNamespace .
2. Browse to the \bin\Release\netcoreapp3.1\publish subfolder and create a .zip file from its contents.
3. Publish the .zip file by running the following command, replacing the FUNCTION_APP_NAME ,
RESOURCE_GROUP_NAME , and PATH_TO_ZIP parameters as appropriate:
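A sketch of that command:

az functionapp deployment source config-zip -g RESOURCE_GROUP_NAME -n FUNCTION_APP_NAME --src PATH_TO_ZIP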
Now that you have updated the function app with the new trigger, you can verify that it works using the identity.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, you can delete them by deleting the resource group.
From the Azure portal menu or Home page, select Resource groups . Then, on the Resource groups page,
select myResourceGroup .
On the myResourceGroup page, make sure that the listed resources are the ones you want to delete.
Select Delete resource group , type myResourceGroup in the text box to confirm, and then select Delete .
Next steps
In this tutorial, you created a function app with identity-based connections.
Use the following links to learn more about Azure Functions with identity-based connections:
Managed identity in Azure Functions
Identity-based connections in Azure Functions
Functions documentation for local development
Tutorial: Connect a function app to Azure SQL with
managed identity and SQL bindings
8/2/2022 • 3 minutes to read
Azure Functions provides a managed identity, which is a turn-key solution for securing access to Azure SQL
Database and other Azure services. Managed identities make your app more secure by eliminating secrets from
your app, such as credentials in the connection strings. In this tutorial, you'll add managed identity to an Azure
Function that utilizes Azure SQL bindings. A sample Azure Function project with SQL bindings is available in the
ToDo backend example.
When you're finished with this tutorial, your Azure Function will connect to Azure SQL Database without the
need for a username and password.
An overview of the steps you'll take:
Enable Azure AD authentication to the SQL database
Enable Azure Function managed identity
Grant SQL Database access to the managed identity
Configure Azure Function SQL connection string
3. Add this Azure AD user as an Active Directory admin using the az sql server ad-admin create command in
the Cloud Shell. In the following command, replace <server-name> with the server name (without the
.database.windows.net suffix).
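A sketch of that command (the display name and object ID of the Azure AD user are placeholders):

az sql server ad-admin create --resource-group <RESOURCE_GROUP_NAME> --server-name <server-name> --display-name <ADMIN_DISPLAY_NAME> --object-id <AZURE_AD_USER_OBJECT_ID>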
For more information on adding an Active Directory admin, see Provision an Azure Active Directory
administrator for your server
For information on enabling system-assigned managed identity through Azure CLI or PowerShell, check out
more information on using managed identities with Azure Functions.
2. In the SQL prompt for the database you want, run the following commands to grant permissions to your
function. For example,
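A sketch of those T-SQL commands, granting the read and write access that SQL bindings typically need:

CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
GO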
<identity-name> is the name of the managed identity in Azure AD. If the identity is system-assigned, the
name is always the same as the name of your Function app.
testdb is the name of the database we're connecting to and demo.database.windows.net is the name of the
server we're connecting to.
Next steps
Read data from a database (Input binding)
Save data to a database (Output binding)
Review ToDo API sample with Azure SQL bindings
Tutorial Step 2: Automate resizing uploaded images
using Event Grid
8/2/2022 • 8 minutes to read
Azure Event Grid is an eventing service for the cloud. Event Grid enables you to create subscriptions to events
raised by Azure services or third-party resources.
This tutorial extends the Upload image data in the cloud with Azure Storage tutorial to add serverless automatic
thumbnail generation using Azure Event Grid and Azure Functions. Event Grid enables Azure Functions to
respond to Azure Blob storage events and generate thumbnails of uploaded images. An event subscription is
created against the Blob storage create event. When a blob is added to a specific Blob storage container, a
function endpoint is called. Data passed to the function binding from Event Grid is used to access the blob and
generate the thumbnail image.
You use the Azure CLI and the Azure portal to add the resizing functionality to an existing image upload app.
Prerequisites
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
PowerShell
Azure CLI
$resourceGroupName="myResourceGroup"
$location="eastus"
Now configure the function app to connect to the Blob storage account you created in the previous tutorial.
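A sketch of that step with the Azure CLI ($storageConnectionString is assumed to hold the Blob storage account's connection string, and the app name is a placeholder):

az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group myResourceGroup --settings AzureWebJobsStorage=$storageConnectionString FUNCTIONS_EXTENSION_VERSION=~2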
The FUNCTIONS_EXTENSION_VERSION=~2 setting makes the function app run on version 2.x of the Azure Functions
runtime.
You can now deploy a function code project to this function app.
The sample C# resize function is available on GitHub. Deploy this code project to the function app by using the
az functionapp deployment source config command.
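A sketch of that deployment (the sample repository URL is an assumption based on the published sample):

az functionapp deployment source config --name <FUNCTION_APP_NAME> --resource-group myResourceGroup --branch master --manual-integration --repo-url https://github.com/Azure-Samples/function-image-upload-resize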
The image resize function is triggered by HTTP requests sent to it from the Event Grid service. You tell Event Grid
that you want to get these notifications at your function's URL by creating an event subscription. For this tutorial,
you subscribe to blob-created events.
The data passed to the function from the Event Grid notification includes the URL of the blob. That URL is in turn
passed to the input binding to obtain the uploaded image from Blob storage. The function generates a
thumbnail image and writes the resulting stream to a separate container in Blob storage.
This project uses EventGridTrigger for the trigger type. Using the Event Grid trigger is recommended over
generic HTTP triggers. Event Grid automatically validates Event Grid Function triggers. With generic HTTP
triggers, you must implement the validation response.
To learn more about this function, see the function.json and run.csx files.
The function project code is deployed directly from the public sample repository. To learn more about
deployment options for Azure Functions, see Continuous deployment for Azure Functions.
2. Select Integration , then choose the Event Grid Trigger and select Create Event Grid
subscription .
SETTING            SUGGESTED VALUE             DESCRIPTION
Resource           Your Blob storage account   Choose the Blob storage account you created.
System Topic Name  imagestoragesystopic        Specify a name for the system topic. To learn about system topics, see System topics overview.
Event types        Blob created                Uncheck all types other than Blob created . Only event types of Microsoft.Storage.BlobCreated are passed to the function.
5. Select Create to add the event subscription. This subscription triggers the Thumbnail
function when a blob is added to the images container. The function resizes the images and adds them to
the thumbnails container.
Now that the backend services are configured, you test the image resize functionality in the sample web app.
Click the Upload photos region to select and upload a file. You can also drag a photo to this region.
Notice that after the uploaded image disappears, a copy of the uploaded image is displayed in the Generated
Thumbnails carousel. This image was resized by the function, added to the thumbnails container, and
downloaded by the web client.
Next steps
In this tutorial, you learned how to:
Create a general Azure Storage account
Deploy serverless code using Azure Functions
Create a Blob storage event subscription in Event Grid
Advance to part three of the Storage tutorial series to learn how to secure access to the storage account.
Secure access to an application's data in the cloud
To learn more about Event Grid, see An introduction to Azure Event Grid.
To try another tutorial that features Azure Functions, see Create a function that integrates with Azure Logic
Apps.
Tutorial: Apply machine learning models in Azure
Functions with Python and TensorFlow
8/2/2022 • 8 minutes to read
In this article, you learn how to use Python, TensorFlow, and Azure Functions with a machine learning model to
classify an image based on its contents. Because you do all work locally and create no Azure resources in the
cloud, there is no cost to complete this tutorial.
Initialize a local environment for developing Azure Functions in Python.
Import a custom TensorFlow machine learning model into a function app.
Build a serverless HTTP API for classifying an image as containing a dog or a cat.
Consume the API from a web app.
Prerequisites
An Azure account with an active subscription. Create an account for free.
Python 3.7.4. (Python 3.7.4 and Python 3.6.x are verified with Azure Functions; Python 3.8 and later versions
are not yet supported.)
The Azure Functions Core Tools
A code editor such as Visual Studio Code
Prerequisite check
1. In a terminal or command window, run func --version to check that the Azure Functions Core Tools are
version 2.7.1846 or later.
2. Run python --version (Linux/MacOS) or py --version (Windows) to check your Python version reports
3.7.x.
cd functions-python-tensorflow-tutorial
cd start
python -m venv .venv
source .venv/bin/activate
If Python didn't install the venv package on your Linux distribution, run the following command:
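On Debian-based distributions, for example:

sudo apt-get install python3-venv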
You run all subsequent commands in this activated virtual environment. (To exit the virtual environment, run
deactivate .)
After initialization, the start folder contains various files for the project, including configuration files
named local.settings.json and host.json. Because local.settings.json can contain secrets downloaded from
Azure, the file is excluded from source control by default in the .gitignore file.
TIP
Because a function project is tied to a specific runtime, all the functions in the project must be written with the
same language.
2. Add a function to your project by using the following command, where the --name argument is the
unique name of your function and the --template argument specifies the function's trigger. func new
creates a subfolder matching the function name that contains a code file appropriate to the project's
chosen language and a configuration file named function.json.
This command creates a folder matching the name of the function, classify. In that folder are two files:
__init__.py, which contains the function code, and function.json, which describes the function's trigger and
its input and output bindings. For details on the contents of these files, see Examine the file contents in
the Python quickstart.
Run the function locally
1. Start the function by starting the local Azure Functions runtime host in the start folder:
func start
2. Once you see the classify endpoint appear in the output, navigate to the URL
http://localhost:7071/api/classify?name=Azure . The message "Hello Azure!" should appear in the output.
TIP
If you want to host your TensorFlow model independent of the function app, you can instead mount a file share
containing your model to your Linux function app. To learn more, see Mount a file share to a Python function app using
Azure CLI.
1. In the start folder, run the following command to copy the model files into the classify folder. Be sure to
include \* in the command.
bash
PowerShell
Cmd
cp ../resources/model/* classify
2. Verify that the classify folder contains files named model.pb and labels.txt. If not, check that you ran the
command in the start folder.
3. In the start folder, run the following command to copy a file with helper code into the classify folder:
bash
PowerShell
Cmd
cp ../resources/predict.py classify
4. Verify that the classify folder now contains a file named predict.py.
5. Open start/requirements.txt in a text editor and add the following dependencies required by the helper
code:
tensorflow==1.14
Pillow
requests
6. Save requirements.txt.
7. Install the dependencies by running the following command in the start folder. Installation may take a few
minutes, during which time you can proceed with modifying the function in the next section.
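A sketch of that install:

pip install --no-cache-dir -r requirements.txt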
On Windows, you may encounter the error, "Could not install packages due to an EnvironmentError:
[Errno 2] No such file or directory:" followed by a long pathname to a file like
sharded_mutable_dense_hashtable.cpython-37.pyc. Typically, this error happens because the depth of the
folder path becomes too long. In this case, set the registry key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem@LongPathsEnabled to 1 to enable long
paths. Alternately, check where your Python interpreter is installed. If that location has a long path, try
reinstalling in a folder with a shorter path.
TIP
The first time predict.py makes a prediction, a function named _initialize loads the TensorFlow model
from disk and caches it in global variables. This caching speeds up subsequent predictions. For more information on using
global variables, refer to the Azure Functions Python developer guide.
import logging
import azure.functions as func
import json
from .predict import predict_image_from_url
2. Replace the entire contents of the main function with the following code:
def main(req: func.HttpRequest) -> func.HttpResponse:
    image_url = req.params.get('img')
    results = predict_image_from_url(image_url)

    headers = {
        "Content-type": "application/json",
        "Access-Control-Allow-Origin": "*"
    }

    return func.HttpResponse(json.dumps(results), headers=headers)
This function receives an image URL in a query string parameter named img . It then calls
predict_image_from_url from the helper library to download and classify the image using the TensorFlow
model. The function then returns an HTTP response with the results.
IMPORTANT
Because this HTTP endpoint is called by a web page hosted on another domain, the response includes an
Access-Control-Allow-Origin header to satisfy the browser's Cross-Origin Resource Sharing (CORS)
requirements.
In a production application, change * to the web page's specific origin for added security.
3. Save your changes, then assuming that dependencies have finished installing, start the local function host
again with func start . Be sure to run the host in the start folder with the virtual environment activated.
Otherwise the host will start, but you will see errors when invoking the function.
func start
4. In a browser, open the following URL to invoke the function with the URL of a cat image and confirm that
the returned JSON classifies the image as a cat.
http://localhost:7071/api/classify?img=https://raw.githubusercontent.com/Azure-Samples/functions-
python-tensorflow-tutorial/master/resources/assets/samples/cat1.png
5. Keep the host running because you use it in the next step.
Run the local web app front end to test the function
To test invoking the function endpoint from another web app, there's a simple app in the repository's frontend
folder.
1. Open a new terminal or command prompt and activate the virtual environment (as described earlier
under Create and activate a Python virtual environment).
2. Navigate to the repository's frontend folder.
3. Start an HTTP server with Python:
bash
PowerShell
Cmd
python -m http.server
4. In a browser, navigate to localhost:8000 , then enter one of the following photo URLs into the textbox, or
use the URL of any publicly accessible image.
https://raw.githubusercontent.com/Azure-Samples/functions-python-tensorflow-
tutorial/master/resources/assets/samples/cat1.png
https://raw.githubusercontent.com/Azure-Samples/functions-python-tensorflow-
tutorial/master/resources/assets/samples/cat2.png
https://raw.githubusercontent.com/Azure-Samples/functions-python-tensorflow-
tutorial/master/resources/assets/samples/dog1.png
https://raw.githubusercontent.com/Azure-Samples/functions-python-tensorflow-
tutorial/master/resources/assets/samples/dog2.png
5. Select Submit to invoke the function endpoint to classify the image.
If the browser reports an error when you submit the image URL, check the terminal in which you're
running the function app. If you see an error like "No module found 'PIL'", you may have started the
function app in the start folder without first activating the virtual environment you created earlier. If you
still see errors, run pip install -r requirements.txt again with the virtual environment activated and
look for errors.
NOTE
The model always classifies the content of the image as a cat or a dog, regardless of whether the image contains either,
defaulting to dog. Images of tigers and panthers, for example, typically classify as cat, but images of elephants, carrots, or
airplanes classify as dog.
Clean up resources
Because the entirety of this tutorial runs locally on your machine, there are no Azure resources or services to
clean up.
Next steps
In this tutorial, you learned how to build and customize an HTTP API endpoint with Azure Functions to classify
images using a TensorFlow model. You also learned how to call the API from a web app. You can use the
techniques in this tutorial to build out APIs of any complexity, all while running on the serverless compute
model provided by Azure Functions.
Deploy the function to Azure Functions using the Azure CLI Guide
See also:
Deploy the function to Azure using Visual Studio Code.
Azure Functions Python Developer Guide
Mount a file share to a Python function app using Azure CLI
Tutorial: Deploy a pre-trained image classification
model to Azure Functions with PyTorch
8/2/2022 • 7 minutes to read
In this article, you learn how to use Python, PyTorch, and Azure Functions to load a pre-trained model for
classifying an image based on its contents. Because you do all work locally and create no Azure resources in the
cloud, there is no cost to complete this tutorial.
Initialize a local environment for developing Azure Functions in Python.
Import a pre-trained PyTorch machine learning model into a function app.
Build a serverless HTTP API for classifying an image as one of 1000 ImageNet classes.
Consume the API from a web app.
Prerequisites
An Azure account with an active subscription. Create an account for free.
Python 3.7.4 or above. (Python 3.8.x and Python 3.6.x are also verified with Azure Functions.)
The Azure Functions Core Tools
A code editor such as Visual Studio Code
Prerequisite check
1. In a terminal or command window, run func --version to check that the Azure Functions Core Tools are
version 2.7.1846 or later.
2. Run python --version (Linux/MacOS) or py --version (Windows) to check your Python version reports
3.7.x.
cd functions-python-pytorch-tutorial
bash
PowerShell
Cmd
cd start
python -m venv .venv
source .venv/bin/activate
If Python didn't install the venv package on your Linux distribution, run the following command:
You run all subsequent commands in this activated virtual environment. (To exit the virtual environment, run
deactivate .)
After initialization, the start folder contains various files for the project, including configuration files
named local.settings.json and host.json. Because local.settings.json can contain secrets downloaded from
Azure, the file is excluded from source control by default in the .gitignore file.
TIP
Because a function project is tied to a specific runtime, all the functions in the project must be written with the
same language.
2. Add a function to your project by using the following command, where the --name argument is the
unique name of your function and the --template argument specifies the function's trigger. func new
creates a subfolder matching the function name that contains a code file appropriate to the project's
chosen language and a configuration file named function.json.
This command creates a folder matching the name of the function, classify. In that folder are two files:
__init__.py, which contains the function code, and function.json, which describes the function's trigger and
its input and output bindings. For details on the contents of these files, see Examine the file contents in
the Python quickstart.
2. Once you see the classify endpoint appear in the output, navigate to the URL
http://localhost:7071/api/classify?name=Azure . The message "Hello Azure!" should appear in the output.
cp ../resources/predict.py classify
cp ../resources/labels.txt classify
2. Verify that the classify folder contains files named predict.py and labels.txt. If not, check that you ran the
command in the start folder.
3. Open start/requirements.txt in a text editor and add the dependencies required by the helper code, which
should look like the following:
azure-functions
requests
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.5.0+cpu
torchvision==0.6.0+cpu
4. Save requirements.txt, then run the following command from the start folder to install the dependencies.
Installation may take a few minutes, during which time you can proceed with modifying the function in the next
section.
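A sketch of that install:

pip install --no-cache-dir -r requirements.txt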
TIP
On Windows, you may encounter the error, "Could not install packages due to an EnvironmentError: [Errno 2] No
such file or directory:" followed by a long pathname to a file like sharded_mutable_dense_hashtable.cpython-37.pyc.
Typically, this error happens because the depth of the folder path becomes too long. In this case, set the registry key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem@LongPathsEnabled to 1 to enable long
paths. Alternately, check where your Python interpreter is installed. If that location has a long path, try reinstalling in a
folder with a shorter path.
Update the function to run predictions
1. Open classify/__init__.py in a text editor and add the following lines after the existing import statements
to import the standard JSON library and the predict helpers:
import logging
import azure.functions as func
import json
from .predict import predict_image_from_url
2. Replace the entire contents of the main function with the following code:

def main(req: func.HttpRequest) -> func.HttpResponse:
    image_url = req.params.get('img')
    results = predict_image_from_url(image_url)

    headers = {
        "Content-type": "application/json",
        "Access-Control-Allow-Origin": "*"
    }

    return func.HttpResponse(json.dumps(results), headers=headers)
This function receives an image URL in a query string parameter named img . It then calls
predict_image_from_url from the helper library to download and classify the image using the PyTorch
model. The function then returns an HTTP response with the results.
IMPORTANT
Because this HTTP endpoint is called by a web page hosted on another domain, the response includes an
Access-Control-Allow-Origin header to satisfy the browser's Cross-Origin Resource Sharing (CORS)
requirements.
In a production application, change * to the web page's specific origin for added security.
3. Save your changes, then assuming that dependencies have finished installing, start the local function host
again with func start . Be sure to run the host in the start folder with the virtual environment activated.
Otherwise the host will start, but you will see errors when invoking the function.
func start
4. In a browser, open the following URL to invoke the function with the URL of a Bernese Mountain Dog
image and confirm that the returned JSON classifies the image as a Bernese Mountain Dog.
http://localhost:7071/api/classify?img=https://raw.githubusercontent.com/Azure-Samples/functions-
python-pytorch-tutorial/master/resources/assets/Bernese-Mountain-Dog-Temperament-long.jpg
5. Keep the host running because you use it in the next step.
Run the local web app front end to test the function
To test invoking the function endpoint from another web app, there's a simple app in the repository's frontend
folder.
1. Open a new terminal or command prompt and activate the virtual environment (as described earlier
under Create and activate a Python virtual environment).
2. Navigate to the repository's frontend folder.
3. Start an HTTP server with Python:
bash
PowerShell
Cmd
python -m http.server
4. In a browser, navigate to localhost:8000 , then enter one of the following photo URLs into the textbox, or
use the URL of any publicly accessible image.
https://raw.githubusercontent.com/Azure-Samples/functions-python-pytorch-
tutorial/master/resources/assets/Bernese-Mountain-Dog-Temperament-long.jpg
https://github.com/Azure-Samples/functions-python-pytorch-
tutorial/blob/master/resources/assets/bald-eagle.jpg?raw=true
https://raw.githubusercontent.com/Azure-Samples/functions-python-pytorch-
tutorial/master/resources/assets/penguin.jpg
5. Select Submit to invoke the function endpoint to classify the image.
If the browser reports an error when you submit the image URL, check the terminal in which you're
running the function app. If you see an error like "No module found 'PIL'", you may have started the
function app in the start folder without first activating the virtual environment you created earlier. If you
still see errors, run pip install -r requirements.txt again with the virtual environment activated and
look for errors.
Clean up resources
Because the entirety of this tutorial runs locally on your machine, there are no Azure resources or services to
clean up.
Next steps
In this tutorial, you learned how to build and customize an HTTP API endpoint with Azure Functions to classify
images using a PyTorch model. You also learned how to call the API from a web app. You can use the techniques
in this tutorial to build out APIs of any complexity, all while running on the serverless compute model provided
by Azure Functions.
See also:
Deploy the function to Azure using Visual Studio Code.
Azure Functions Python Developer Guide
Deploy the function to Azure Functions using the Azure CLI Guide
Create a function on Linux using a custom container
8/2/2022 • 30 minutes to read
In this tutorial, you create and deploy your code to Azure Functions as a custom Docker container using a Linux
base image. You typically use a custom image when your functions require a specific language version or have a
specific dependency or configuration that isn't provided by the built-in image.
Azure Functions supports any language or runtime using custom handlers. For some languages, such as the R
programming language used in this tutorial, you need to install the runtime or additional libraries as
dependencies, which requires the use of a custom container.
Deploying your function code in a custom Linux container requires Premium plan or a Dedicated (App Service)
plan hosting. Completing this tutorial incurs costs of a few US dollars in your Azure account, which you can
minimize by cleaning-up resources when you're done.
You can also use a default Azure App Service container as described in Create your first function hosted on
Linux. Supported base images for Azure Functions are found in the Azure Functions base images repo.
In this tutorial, you learn how to:
Create a function app and Dockerfile using the Azure Functions Core Tools.
Build a custom image using Docker.
Publish a custom image to a container registry.
Create supporting resources in Azure for the function app.
Deploy a function app from Docker Hub.
Add application settings to the function app.
Enable continuous deployment.
Enable SSH connections to the container.
Add a Queue storage output binding.
You can follow this tutorial on any computer running Windows, macOS, or Linux.
bash
PowerShell
Cmd
source .venv/bin/activate
If Python didn't install the venv package on your Linux distribution, run the following command:
In-process
Isolated process
In an empty folder, run the following command to generate the Functions project from a Maven archetype:
Bash
PowerShell
Cmd
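A sketch of that command (the -Ddocker flag asks the archetype to generate a Dockerfile; the version values shown are assumptions):

mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DjavaVersion=8 -Ddocker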
The -DjavaVersion parameter tells the Functions runtime which version of Java to use. Use -DjavaVersion=11 if
you want your functions to run on Java 11. When you don't specify -DjavaVersion , Maven defaults to Java 8.
For more information, see Java versions.
IMPORTANT
The JAVA_HOME environment variable must be set to the install location of the correct version of the JDK to complete
this article.
Maven asks you for values needed to finish generating the project on deployment. Follow the prompts and
provide the following information:
PROMPT VALUE DESCRIPTION
The --docker option generates a Dockerfile for the project, which defines a suitable custom container for use
with Azure Functions and the selected runtime.
Navigate into the project folder:
cd fabrikam-functions
In-process
Isolated process
Use the following command to add a function to your project, where the --name argument is the unique name
of your function and the --template argument specifies the function's trigger. func new creates a subfolder
matching the function name that contains a configuration file named function.json.
In a text editor, create a file in the project folder named handler.R . Add the following code as its content (a
minimal handler; the full sample also routes requests by path and HTTP method before calling
return(method_handler(request)) ):

library(httpuv)

# Read the port assigned by the Functions host.
PORT <- strtoi(Sys.getenv("FUNCTIONS_CUSTOMHANDLER_PORT", "8080"))

app <- list(
  call = function (request) {
    # Respond to every request with a plain-text greeting.
    response <- list(status = 200L,
                     headers = list('Content-Type' = 'text/plain'),
                     body = "Hello, World!")
    return(response)
  }
)

runServer("0.0.0.0", PORT, app)
In host.json, modify the customHandler section to configure the custom handler's startup command.
"customHandler": {
"description": {
"defaultExecutablePath": "Rscript",
"arguments": [
"handler.R"
]
},
"enableForwardingHttpRequest": true
}
To test the function locally, start the local Azure Functions runtime host in the root of the project folder.
R -e "install.packages('httpuv', repos='http://cran.rstudio.com/')"
func start
Modify the Dockerfile to install R. Replace the contents of the Dockerfile with the following code:

FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true

# Install R and the httpuv package used by handler.R.
RUN apt update && \
apt install -y r-base && \
R -e "install.packages('httpuv', repos='http://cran.rstudio.com/')"

COPY . /home/site/wwwroot
In the root project folder, run the docker build command, provide a name as azurefunctionsimage , and tag as
v1.0.0 . Replace <DOCKER_ID> with your Docker Hub account ID. This command builds the Docker image for the
container.
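For example:

docker build --tag <DOCKER_ID>/azurefunctionsimage:v1.0.0 .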
When the command completes, you can run the new container locally.
To test the build, run the image in a local container using the docker run command, replace <docker_id> again
with your Docker Hub account ID, and add the ports argument as -p 8080:80 :
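For example:

docker run -p 8080:80 -it <docker_id>/azurefunctionsimage:v1.0.0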
In-process
Isolated process
docker login
2. After you've signed in, push the image to Docker Hub by using the docker push command, again replace
the <docker_id> with your Docker Hub account ID.
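For example:

docker push <docker_id>/azurefunctionsimage:v1.0.0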
3. Depending on your network speed, pushing the image for the first time might take a few minutes
(pushing subsequent changes is much faster). While you're waiting, you can proceed to the next section
and create Azure resources in another terminal.
Azure CLI
Azure PowerShell
az login
Azure CLI
Azure PowerShell
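A sketch of that command:

az group create --name AzureFunctionsContainers-rg --location <REGION>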
The az group create command creates a resource group. In the above command, replace <REGION> with a
region near you, using an available region code returned from the az account list-locations command.
3. Create a general-purpose storage account in your resource group and region.
Azure CLI
Azure PowerShell
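A sketch of that step (storage account names must be globally unique):

az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsContainers-rg --sku Standard_LRS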
4. Use the az functionapp plan create command to create a Premium plan for Azure Functions named myPremiumPlan in the Elastic
Premium 1 pricing tier ( --sku EP1 ), in your <REGION> , and in a Linux container ( --is-linux ).
Azure CLI
Azure PowerShell
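A sketch of that command:

az functionapp plan create --resource-group AzureFunctionsContainers-rg --name myPremiumPlan --location <REGION> --number-of-workers 1 --sku EP1 --is-linux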
We use the Premium plan here, which can scale as needed. For more information about hosting, see
Azure Functions hosting plans comparison. For more information on how to calculate costs, see the
Functions pricing page.
The command also provisions an associated Azure Application Insights instance in the same resource
group, with which you can monitor your function app and view logs. For more information, see Monitor
Azure Functions. The instance incurs no costs until you activate it.
Azure CLI
Azure PowerShell
NOTE
If you're using a custom container registry, then the deployment-container-image-name parameter will refer to
the registry URL.
In this example, replace <STORAGE_NAME> with the name you used in the previous section for the storage
account. Also, replace <APP_NAME> with a globally unique name appropriate to you, and <DOCKER_ID> with
your Docker Hub account ID. When you're deploying from a custom container registry, use the
deployment-container-image-name parameter to indicate the URL of the registry.
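A sketch of the create command those placeholders belong to (the runtime version flag is an assumption):

az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0 --functions-version 3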
TIP
You can use the DisableColor setting in the host.json file to prevent ANSI control characters from being written
to the container logs.
2. Use the following command to get the connection string for the storage account you created:
Azure CLI
Azure PowerShell
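A sketch of that command:

az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <STORAGE_NAME> --query connectionString --output tsv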
The connection string for the storage account is returned by using the az storage account show-
connection-string command.
Replace <STORAGE_NAME> with the name of the storage account you created earlier.
3. Use the following command to add the setting to the function app:
Azure CLI
Azure PowerShell
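A sketch of that command, with <CONNECTION_STRING> being the value returned by the previous step:

az functionapp config appsettings set --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=<CONNECTION_STRING>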
NOTE
If you publish your custom image to a private container registry, you must use environment variables in the Dockerfile for
the connection string instead. For more information, see the ENV instruction. You must also set the
DOCKER_REGISTRY_SERVER_USERNAME and DOCKER_REGISTRY_SERVER_PASSWORD variables. To use the values, you must
rebuild the image, push the image to the registry, and then restart the function app on Azure.
In-process
Isolated process
https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions
Replace <APP_NAME> with the name of your function app. When you navigate to this URL, the browser must
display similar output as when you ran the function locally.
az functionapp deployment container config --enable-cd --query CI_CD_URL --output tsv --name <APP_NAME> --resource-group AzureFunctionsContainers-rg
The az functionapp deployment container config command enables continuous deployment and returns
the deployment webhook URL. You can retrieve this URL at any later time by using the az functionapp
deployment container show-cd-url command.
As before, replace <APP_NAME> with your function app name.
2. Copy the deployment webhook URL to the clipboard.
3. Open Docker Hub, sign in, and select Repositories on the navigation bar. Locate and select the image,
select the Webhooks tab, specify a Webhook name , paste your URL in Webhook URL , and then select
Create .
4. With the webhook set, Azure Functions redeploys your image whenever you update it in Docker Hub.
FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice
FROM mcr.microsoft.com/azure-functions/node:2.0-appservice
FROM mcr.microsoft.com/azure-functions/powershell:2.0-appservice
FROM mcr.microsoft.com/azure-functions/python:2.0-python3.7-appservice
2. Rebuild the image by using the docker build command again, replace the <docker_id> with your
Docker Hub account ID.
docker build --tag <docker_id>/azurefunctionsimage:v1.0.0 .
3. Push the updated image to Docker Hub, which should take considerably less time than the first push.
Only the updated segments of the image need to be uploaded now.
4. Azure Functions automatically redeploys the image to your function app; the process takes place in less
than a minute.
5. In a browser, open https://<app_name>.scm.azurewebsites.net/ and replace <app_name> with your unique
name. This URL is the Advanced Tools (Kudu) endpoint for your function app container.
6. Sign in to your Azure account, and then select the SSH to establish a connection with the container.
Connecting might take a few moments if Azure is still updating the container image.
7. After a connection is established with your container, run the top command to view the currently
running processes.
2. Open local.settings.json file and locate the value named AzureWebJobsStorage , which is the Storage
account connection string. You use the name AzureWebJobsStorage and the connection string in other
sections of this article.
IMPORTANT
Because the local.settings.json file contains secrets downloaded from Azure, always exclude this file from source control.
The .gitignore file created with a local functions project excludes the file by default.
In-process
Isolated process
Now, you can add the storage output binding to your project.
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
Each binding has at least a type, a direction, and a name. In the above example, the first binding is of type
httpTrigger with the direction in . For the in direction, name specifies the name of an input parameter that's
sent to the function when invoked by the trigger.
The second binding in the collection is named res . This http binding is an output binding ( out ) that is used
to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
}
The second binding in the collection is of type http with the direction out , in which case the special name of
$return indicates that this binding uses the function's return value rather than providing an input parameter.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
The second binding in the collection is named res . This http binding is an output binding ( out ) that is used
to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
}
In this case, msg is given to the function as an output argument. For a queue type, you must also specify the
name of the queue in queueName and provide the name of the Azure Storage connection (from the
local.settings.json file) in connection .
In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions
depend on whether your app runs in-process (C# class library) or in an isolated process.
In-process
Isolated process
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
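[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,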
The msg parameter is an ICollector<T> type, representing a collection of messages written to an output
binding when the function completes. In this case, the output is a storage queue named outqueue . The
StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting
that contains the storage account connection string and can be applied at the class, method, or parameter level.
In this case, you could omit StorageAccountAttribute because you're already using the default storage account.
The Run method definition must now look like the following code:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In a Java project, the bindings are defined as binding annotations on the function method. The function.json file
is then autogenerated based on these annotations.
Browse to the location of your function code under src/main/java, open the Function.java project file, and add
the following parameter to the run method definition:
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String>
msg
The msg parameter is an OutputBinding<T> type, which represents a collection of strings. These strings are
written as messages to an output binding when the function completes. In this case, the output is a storage
queue named outqueue . The connection string for the Storage account is set by the connection method. You
pass the application setting that contains the Storage account connection string, rather than passing the
connection string itself.
The run method definition must now look like the following example:
@FunctionName("HttpTrigger-Java")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.FUNCTION)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage")
OutputBinding<String> msg, final ExecutionContext context) {
...
}
import logging

import azure.functions as func


def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> func.HttpResponse:
    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        msg.set(name)
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
The msg parameter is an instance of the azure.functions.Out class . The set method writes a string message
to the queue. In this case, it's the name passed to the function in the URL query string.
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this
code before the context.res statement.
// Add a message to the Storage queue,
// which is the name passed to the function.
context.bindings.msg = (req.query.name || req.body.name);
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this
code before the context.res statement.
context.bindings.msg = name;
import { AzureFunction, Context, HttpRequest } from "@azure/functions"

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
context.log('HTTP trigger function processed a request.');
const name = (req.query.name || (req.body && req.body.name));
if (name) {
// Add a message to the storage queue,
// which is the name passed to the function.
context.bindings.msg = name;
// Send a "hello" response.
context.res = {
// status: 200, /* Defaults to 200 */
body: "Hello " + (req.query.name || req.body.name)
};
}
else {
context.res = {
status: 400,
body: "Please pass a name on the query string or in the request body"
};
}
};

export default httpTrigger;
Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add
this code before you set the OK status in the if statement.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
if ($name) {
# Write the $name value to the queue,
# which is the name passed to the function.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
$status = [HttpStatusCode]::OK
$body = "Hello $name"
}
else {
$status = [HttpStatusCode]::BadRequest
$body = "Please pass a name on the query string or in the request body."
}
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
    // Add a message to the output collection.
    msg.Add(name);
}
    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Now, you can use the new msg parameter to write to the output binding from your function code. Add the
following line of code before the success response to add the value of name to the msg output binding.
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a
queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your run method must now look like the following example:
if (name == null) {
    return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
        .body("Please pass a name on the query string or in the request body").build();
} else {
    // Write the name to the message queue.
    msg.setValue(name);

    return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}
If your project includes the generated unit tests, update them as well so that the run method is called with a mocked msg parameter:
@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);
final HttpResponseMessage ret = new Function().run(req, msg, context);
2. Push the updated image back to the repository with docker push.
3. Because you configured continuous delivery, updating the image in the registry again automatically
updates your function app in Azure.
bash
PowerShell
Azure CLI
export AZURE_STORAGE_CONNECTION_STRING="<MY_CONNECTION_STRING>"
2. (Optional) Use the az storage queue list command to view the Storage queues in your account. The output from this command must include a queue named outqueue, which was created when the function wrote its first message to that queue.
3. Use the az storage message get command to read the message from this queue, which should be the
value you supplied when testing the function earlier. The command reads and removes the first message
from the queue.
bash
PowerShell
Azure CLI
echo `echo $(az storage message get --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode`
Because the message body is stored base64 encoded, the message must be decoded before it's displayed.
After you execute az storage message get, the message is removed from the queue. If there was only one message in outqueue, you won't retrieve a message when you run this command a second time; instead, you get an error.
Clean up resources
If you want to continue working with Azure Functions using the resources you created in this tutorial, you can leave all those resources in place. Because you created a Premium plan for Azure Functions, you'll incur ongoing costs of one or two US dollars per day.
To avoid ongoing costs, delete the AzureFunctionsContainers-rg resource group to clean up all the resources in
that group:
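az group delete --name AzureFunctionsContainers-rg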
Next steps
Monitoring functions
Scale and hosting options
Kubernetes-based serverless hosting
Tutorial: Deploy Azure Functions as IoT Edge
modules
8/2/2022 • 9 minutes to read
Applies to: IoT Edge 1.1 IoT Edge 1.2 IoT Edge 1.3
You can use Azure Functions to deploy code that implements your business logic directly to your Azure IoT Edge
devices. This tutorial walks you through creating and deploying an Azure Function that filters sensor data on the
simulated IoT Edge device. You use the simulated IoT Edge device that you created in the quickstarts. In this
tutorial, you learn how to:
Use Visual Studio Code to create an Azure Function.
Use VS Code and Docker to create a Docker image and publish it to a container registry.
Deploy the module from the container registry to your IoT Edge device.
View filtered data.
The Azure Function that you create in this tutorial filters the temperature data that's generated by your device.
The Function only sends messages upstream to Azure IoT Hub when the temperature is above a specified
threshold.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development
environment for Linux container development: Develop IoT Edge modules using Linux containers. By completing
that tutorial, you should have the following prerequisites in place:
A free or standard-tier IoT Hub in Azure.
An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a Linux
device or Windows device.
A container registry, like Azure Container Registry.
Visual Studio Code configured with the Azure IoT Tools.
Docker CE configured to run Linux containers.
To develop an IoT Edge module with Azure Functions, install the following additional prerequisites on your development machine:
C# for Visual Studio Code (powered by OmniSharp) extension.
The .NET Core 2.1 SDK.
FIELD                                              VALUE
Provide a solution name                            Enter a descriptive name for your solution, like FunctionSolution, or accept the default.
Provide Docker image repository for the module     An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the last step. Replace localhost:5000 with the Login server value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal. The final string looks like <registry name>.azurecr.io/CSharpFunction.
NOTE
This tutorial uses admin login credentials for Azure Container Registry, which are convenient for development and test
scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service
principals. For more information, see Manage access to your container registry.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EdgeHub;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace Functions.Samples
{
    public static class CSharpFunction
    {
        [FunctionName("CSharpFunction")]
        public static async Task FilterMessageAndSendMessage(
            [EdgeHubTrigger("input1")] Message messageReceived,
            [EdgeHub(OutputName = "output1")] IAsyncCollector<Message> output,
            ILogger logger)
        {
            const int temperatureThreshold = 20;
            byte[] messageBytes = messageReceived.GetBytes();
            var messageString = System.Text.Encoding.UTF8.GetString(messageBytes);

            if (!string.IsNullOrEmpty(messageString))
            {
                logger.LogInformation("Info: Received one non-empty message");
                // Get the body of the message and deserialize it.
                var messageBody = JsonConvert.DeserializeObject<MessageBody>(messageString);
                // The rest of the sample (not shown in this excerpt) compares the reading
                // against temperatureThreshold and sends qualifying messages to the output binding.
            }
        }
    }
}
You may receive a security warning recommending the use of --password-stdin . While that best practice
is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see
the docker login reference.
3. In the VS Code explorer, right-click the deployment.template.json file and select Build and Push IoT
Edge Solution .
The build and push command starts three operations. First, it creates a new folder in the solution called
config that holds the full deployment manifest, which is built out of information in the deployment
template and other solution files. Second, it runs docker build to build the container image based on the
appropriate dockerfile for your target architecture. Then, it runs docker push to push the image
repository to your container registry.
This process may take several minutes the first time, but is faster the next time that you run the
commands.
Clean up resources
If you plan to continue to the next recommended article, you can keep the resources and configurations that you
created and reuse them. You can also keep using the same IoT Edge device as a test device.
Otherwise, you can delete the local configurations and the Azure resources that you created in this article to
avoid charges.
Delete Azure resources
Deleting Azure resources and resource groups is irreversible. Make sure that you don't accidentally delete the
wrong resource group or resources. If you created the IoT hub inside an existing resource group that has
resources that you want to keep, delete only the IoT hub resource itself, not the resource group.
To delete the resources:
1. Sign in to the Azure portal, and then select Resource groups .
2. Select the name of the resource group that contains your IoT Edge test resources.
3. Review the list of resources that are contained in your resource group. If you want to delete all of them,
you can select Delete resource group . If you want to delete only some of them, you can click into each
resource to delete them individually.
Next steps
In this tutorial, you created an Azure Function module with code to filter raw data that's generated by your IoT
Edge device.
Continue on to the next tutorials to learn other ways that Azure IoT Edge can help you turn data into business
insights at the edge.
Find averages by using a floating window in Azure Stream Analytics
Tutorial: Create a function in Java with an Event Hub
trigger and an Azure Cosmos DB output binding
8/2/2022 • 12 minutes to read
This tutorial shows you how to use Azure Functions to create a Java function that analyzes a continuous stream
of temperature and pressure data. Event hub events that represent sensor readings trigger the function. The
function processes the event data, then adds status entries to an Azure Cosmos DB.
In this tutorial, you'll:
Create and configure Azure resources using the Azure CLI.
Create and test Java functions that interact with these resources.
Deploy your functions to Azure and monitor them with Application Insights.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
To complete this tutorial, you must have the following installed:
Java Developer Kit, version 8
Apache Maven, version 3.0 or above
Azure Functions Core Tools version 2.6.666 or above
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
IMPORTANT
The JAVA_HOME environment variable must be set to the install location of the JDK to complete this tutorial.
If you prefer to use the code for this tutorial directly, see the java-functions-eventhub-cosmosdb sample repo.
Bash
Cmd
RESOURCE_GROUP=<value>
EVENT_HUB_NAMESPACE=<value>
EVENT_HUB_NAME=<value>
EVENT_HUB_AUTHORIZATION_RULE=<value>
COSMOS_DB_ACCOUNT=<value>
STORAGE_ACCOUNT=<value>
FUNCTION_APP=<value>
LOCATION=<value>
The rest of this tutorial uses these variables. Be aware that these variables persist only for the duration of your
current Azure CLI or Cloud Shell session. You will need to run these commands again if you use a different local
terminal window or your Cloud Shell session times out.
Create a resource group
Azure uses resource groups to collect all related resources in your account. That way, you can view them as a
unit and delete them with a single command when you're done with them.
Use the following command to create a resource group:
Bash
Cmd
az group create \
--name $RESOURCE_GROUP \
--location $LOCATION
Bash
Cmd
az eventhubs namespace create \
--resource-group $RESOURCE_GROUP \
--name $EVENT_HUB_NAMESPACE
az eventhubs eventhub create \
--resource-group $RESOURCE_GROUP \
--name $EVENT_HUB_NAME \
--namespace-name $EVENT_HUB_NAMESPACE \
--message-retention 1
az eventhubs eventhub authorization-rule create \
--resource-group $RESOURCE_GROUP \
--name $EVENT_HUB_AUTHORIZATION_RULE \
--eventhub-name $EVENT_HUB_NAME \
--namespace-name $EVENT_HUB_NAMESPACE \
--rights Listen Send
The Event Hubs namespace contains the actual event hub and its authorization rule. The authorization rule
enables your functions to send messages to the hub and listen for the corresponding events. One function sends
messages that represent telemetry data. Another function listens for events, analyzes the event data, and stores
the results in Azure Cosmos DB.
Create an Azure Cosmos DB
Next, create an Azure Cosmos DB account, database, and collection using the following commands:
Bash
Cmd
az cosmosdb create \
--resource-group $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT
az cosmosdb sql database create \
--resource-group $RESOURCE_GROUP \
--account-name $COSMOS_DB_ACCOUNT \
--name TelemetryDb
az cosmosdb sql container create \
--resource-group $RESOURCE_GROUP \
--account-name $COSMOS_DB_ACCOUNT \
--database-name TelemetryDb \
--name TelemetryInfo \
--partition-key-path '/temperatureStatus'
The partition-key-path value partitions your data based on the temperatureStatus value of each item. The
partition key enables Cosmos DB to increase performance by dividing your data into distinct subsets that it can
access independently.
Create a storage account and function app
Next, create an Azure Storage account, which is required by Azure Functions, then create the function app. Use
the following commands:
Bash
Cmd
az storage account create \
--resource-group $RESOURCE_GROUP \
--name $STORAGE_ACCOUNT \
--sku Standard_LRS
az functionapp create \
--resource-group $RESOURCE_GROUP \
--name $FUNCTION_APP \
--storage-account $STORAGE_ACCOUNT \
--consumption-plan-location $LOCATION \
--runtime java \
--functions-version 3
When the az functionapp create command creates your function app, it also creates an Application Insights
resource with the same name. The function app is automatically configured with a setting named
APPINSIGHTS_INSTRUMENTATIONKEY that connects it to Application Insights. You can view app telemetry after you
deploy your functions to Azure, as described later in this tutorial.
AZURE_WEB_JOBS_STORAGE=$( \
az storage account show-connection-string \
--name $STORAGE_ACCOUNT \
--query connectionString \
--output tsv)
echo $AZURE_WEB_JOBS_STORAGE
EVENT_HUB_CONNECTION_STRING=$( \
az eventhubs eventhub authorization-rule keys list \
--resource-group $RESOURCE_GROUP \
--name $EVENT_HUB_AUTHORIZATION_RULE \
--eventhub-name $EVENT_HUB_NAME \
--namespace-name $EVENT_HUB_NAMESPACE \
--query primaryConnectionString \
--output tsv)
echo $EVENT_HUB_CONNECTION_STRING
COSMOS_DB_CONNECTION_STRING=$( \
az cosmosdb keys list \
--resource-group $RESOURCE_GROUP \
--name $COSMOS_DB_ACCOUNT \
--type connection-strings \
--query 'connectionStrings[0].connectionString' \
--output tsv)
echo $COSMOS_DB_CONNECTION_STRING
These variables are set to values retrieved from Azure CLI commands. Each command uses a JMESPath query to
extract the connection string from the JSON payload returned. The connection strings are also displayed using
echo so you can confirm that they've been retrieved successfully.
Bash
Cmd
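The command that belongs here stores these values as application settings on the function app so your functions can use them. A minimal sketch of the likely az functionapp config appsettings set call, assuming the setting names (EventHubConnectionString, CosmosDBConnectionString) referenced by the function code later in this tutorial:
az functionapp config appsettings set \
    --resource-group $RESOURCE_GROUP \
    --name $FUNCTION_APP \
    --settings \
        AzureWebJobsStorage=$AZURE_WEB_JOBS_STORAGE \
        EventHubConnectionString=$EVENT_HUB_CONNECTION_STRING \
        CosmosDBConnectionString=$COSMOS_DB_CONNECTION_STRING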
Your Azure resources have now been created and configured to work properly together.
RESOURCE_GROUP=<value>
FUNCTION_APP=<value>
Bash
Cmd
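Clone the sample project into a local folder named telemetry-functions; a sketch of the likely command, assuming the java-functions-eventhub-cosmosdb sample repo mentioned earlier:
git clone https://github.com/Azure-Samples/java-functions-eventhub-cosmosdb.git telemetry-functions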
Bash
Cmd
cd telemetry-functions
rm -r src/test
Bash
Cmd
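Fetch the app settings you created earlier into your local project so the functions can run locally; a sketch, assuming Azure Functions Core Tools:
func azure functionapp fetch-app-settings $FUNCTION_APP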
package com.example;

import com.example.TelemetryItem.status;
import com.microsoft.azure.functions.annotation.Cardinality;
import com.microsoft.azure.functions.annotation.CosmosDBOutput;
import com.microsoft.azure.functions.annotation.EventHubOutput;
import com.microsoft.azure.functions.annotation.EventHubTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.OutputBinding;

public class Function {

    @FunctionName("generateSensorData")
    @EventHubOutput(
        name = "event",
        eventHubName = "", // blank because the value is included in the connection string
        connection = "EventHubConnectionString")
    public TelemetryItem generateSensorData(
        @TimerTrigger(
            name = "timerInfo",
            schedule = "*/10 * * * * *") // every 10 seconds
        String timerInfo,
        final ExecutionContext context) {
        // Body not shown in this excerpt; it builds and returns a simulated TelemetryItem.
    }

    @FunctionName("processSensorData")
    public void processSensorData(
        @EventHubTrigger(
            name = "msg",
            eventHubName = "", // blank because the value is included in the connection string
            cardinality = Cardinality.ONE,
            connection = "EventHubConnectionString")
        TelemetryItem item,
        @CosmosDBOutput(
            name = "databaseOutput",
            databaseName = "TelemetryDb",
            collectionName = "TelemetryInfo",
            connectionStringSetting = "CosmosDBConnectionString")
        OutputBinding<TelemetryItem> document,
        final ExecutionContext context) {

        document.setValue(item);
    }
}
As you can see, this file contains two functions, generateSensorData and processSensorData . The
generateSensorData function simulates a sensor that sends temperature and pressure readings to the event hub.
A timer trigger runs the function every 10 seconds, and an event hub output binding sends the return value to
the event hub.
When the event hub receives the message, it generates an event. The processSensorData function runs when it
receives the event. It then processes the event data and uses an Azure Cosmos DB output binding to send the
results to Azure Cosmos DB.
The data used by these functions is stored using a class called TelemetryItem , which you'll need to implement.
Create a new file called TelemetryItem.java in the same location as Function.java and add the following code:
package com.example;

public class TelemetryItem {
    private String id;
    private double temperature;
    private double pressure;

    // Getters, setters, and the status logic are omitted in this excerpt;
    // see the sample repo for the full class.

    @Override
    public String toString() {
        return "TelemetryItem={id=" + id + ",temperature="
            + temperature + ",pressure=" + pressure + "}";
    }
}
Run locally
You can now build and run the functions locally and see data appear in your Azure Cosmos DB.
Use the following Maven commands to build and run the functions:
Bash
Cmd
mvn clean package
mvn azure-functions:run
After some build and startup messages, you'll see output similar to the following example for each time the
functions run:
You can then go to the Azure portal and navigate to your Azure Cosmos DB account. Select Data Explorer, expand TelemetryInfo, then select Items to view your data when it arrives.
Bash
Cmd
mvn azure-functions:deploy
Your functions now run in Azure, and continue to accumulate data in your Azure Cosmos DB. You can view your
deployed function app in the Azure portal, and view app telemetry through the connected Application Insights
resource, as shown in the following screenshots:
Live Metrics Stream:
Performance:
Clean up resources
When you're finished with the Azure resources you created in this tutorial, you can delete them using the
following command:
Bash
Cmd
az group delete --name $RESOURCE_GROUP
Next steps
In this tutorial, you learned how to create an Azure Function that handles Event Hub events and updates a
Cosmos DB. For more information, see the Azure Functions Java developer guide. For information on the
annotations used, see the com.microsoft.azure.functions.annotation reference.
This tutorial used environment variables and application settings to store secrets such as connection strings. For
information on storing these secrets in Azure Key Vault, see Use Key Vault references for App Service and Azure
Functions.
Next, learn how to use Azure Pipelines CI/CD for automated deployment:
Build and deploy Java to Azure Functions
Azure CLI Samples
8/2/2022 • 2 minutes to read
The following table includes links to bash scripts for Azure Functions that use the Azure CLI.
Create a function app for serverless execution             Create a function app in a Consumption plan.
Create a serverless Python function app                    Create a Python function app in a Consumption plan.
Create a function app in a scalable Premium plan           Create a function app in a Premium plan.
Create a function app in a dedicated (App Service) plan    Create a function app in a dedicated App Service plan.
Create a function app and connect to a storage account     Create a function app and connect it to a storage account.
Create a function app and connect to an Azure Cosmos DB    Create a function app and connect it to an Azure Cosmos DB.
Create a Python function app and mount an Azure Files share    By mounting a share to your Linux function app, you can leverage existing machine learning models or other data in your functions.
Deploy from GitHub                                         Create a function app that deploys from a GitHub repository.
Create a function app for serverless code execution
8/2/2022 • 3 minutes to read
This Azure Functions sample script creates a function app, which is a container for your functions. The function
app is created using the Consumption plan, which is ideal for event-driven serverless workloads.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-azure-functions-rg-$randomIdentifier"
tag="create-function-app-consumption"
storage="msdocsaccount$randomIdentifier"
functionApp="msdocs-serverless-function-$randomIdentifier"
skuStorage="Standard_LRS"
functionsVersion="4"
Clean up resources
To remove the resource group and all resources associated with it, use the az group delete command, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
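az group delete --name $resourceGroup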
Sample reference
Each command in the table links to command-specific documentation. This script uses the following commands:
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional Azure Functions CLI script samples can be found in the Azure Functions documentation.
Create a serverless Python function app using Azure
CLI
8/2/2022 • 3 minutes to read
This Azure Functions sample script creates a function app, which is a container for your functions. The script creates the function app using the Consumption plan.
NOTE
The function app created runs on Python version 3.9. Python versions 3.7 and 3.8 are also supported by Azure Functions.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
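az account set -s $subscription # ...or use 'az login'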
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-azure-functions-rg-$randomIdentifier"
tag="create-function-app-consumption-python"
storage="msdocsaccount$randomIdentifier"
functionApp="msdocs-serverless-python-function-$randomIdentifier"
skuStorage="Standard_LRS"
functionsVersion="4"
pythonVersion="3.9" #Allowed values: 3.7, 3.8, and 3.9
Clean up resources
To remove the resource group and all resources associated with it, use the az group delete command, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
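az group delete --name $resourceGroup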
Sample reference
Each command in the table links to command-specific documentation. This script uses the following commands:
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
This Azure Functions sample script creates a function app, which is a container for your functions. The function
app that is created uses a scalable Premium plan.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-azure-functions-rg-$randomIdentifier"
tag="create-function-app-premium-plan"
storage="msdocsaccount$randomIdentifier"
premiumPlan="msdocs-premium-plan-$randomIdentifier"
functionApp="msdocs-function-$randomIdentifier"
skuStorage="Standard_LRS" # Allowed values: Standard_LRS, Standard_GRS, Standard_RAGRS, Standard_ZRS,
Premium_LRS, Premium_ZRS, Standard_GZRS, Standard_RAGZRS
skuPlan="EP1"
functionsVersion="4"
Clean up resources
To remove the resource group and all resources associated with it, use the az group delete command, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
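az group delete --name $resourceGroup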
Sample reference
Each command in the table links to command-specific documentation. This script uses the following commands:
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
This Azure Functions sample script creates a function app, which is a container for your functions. The function
app that is created uses a dedicated App Service plan, which means your server resources are always on.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-azure-functions-rg-$randomIdentifier"
tag="create-function-app-consumption"
storage="msdocsaccount$randomIdentifier"
appServicePlan="msdocs-app-service-plan-$randomIdentifier"
functionApp="msdocs-serverless-function-$randomIdentifier"
skuStorage="Standard_LRS"
skuPlan="B1"
functionsVersion="4"
Clean up resources
To remove the resource group and all resources associated with it, use the az group delete command, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
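az group delete --name $resourceGroup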
Sample reference
Each command in the table links to command-specific documentation. This script uses the following commands:
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional Azure Functions CLI script samples can be found in the Azure Functions documentation.
Create a function app with a named Storage
account connection
8/2/2022 • 3 minutes to read
This Azure Functions sample script creates a function app and connects the function to an Azure Storage
account. The created app setting that contains the storage connection string can be used with a storage trigger
or binding.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
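az account set -s $subscription # ...or use 'az login'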
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-azure-functions-rg-$randomIdentifier"
tag="create-function-app-connect-to-storage-account"
storage="msdocsaccount$randomIdentifier"
functionApp="msdocs-serverless-function-$randomIdentifier"
skuStorage="Standard_LRS"
functionsVersion="4"
Clean up resources
To remove the resource group and all resources associated with it, use the az group delete command, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
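az group delete --name $resourceGroup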
Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.
C OMMAND N OT ES
az storage account show-connection-string Gets the connection string for the account.
az functionapp config appsettings set Sets the connection string as an app setting in the function
app.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional Azure Functions CLI script samples can be found in the Azure Functions documentation.
Create an Azure Function that connects to an Azure
Cosmos DB
8/2/2022 • 3 minutes to read
This Azure Functions sample script creates a function app and connects the function to an Azure Cosmos DB database. It makes the connection using an Azure Cosmos DB endpoint and access key that it adds to app settings. The created app setting that contains the connection can be used with an Azure Cosmos DB trigger or binding.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
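az account set -s $subscription # ...or use 'az login'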
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-azure-functions-rg-$randomIdentifier"
tag="create-function-app-connect-to-cosmos-db"
storage="msdocsaccount$randomIdentifier"
functionApp="msdocs-serverless-function-$randomIdentifier"
skuStorage="Standard_LRS"
functionsVersion="4"
# Create an Azure Cosmos DB database account using the same function app name.
echo "Creating $functionApp"
az cosmosdb create --name $functionApp --resource-group $resourceGroup

key=$(az cosmosdb keys list --name $functionApp --resource-group $resourceGroup --query primaryMasterKey --output tsv)
echo $key
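# The $endpoint value used below likely comes from a step like this
# (documentEndpoint is the account property that holds it; assumed here).
endpoint=$(az cosmosdb show --name $functionApp --resource-group $resourceGroup --query documentEndpoint --output tsv)
echo $endpoint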
# Configure function app settings to use the Azure Cosmos DB connection string.
az functionapp config appsettings set --name $functionApp --resource-group $resourceGroup --settings CosmosDB_Endpoint=$endpoint CosmosDB_Key=$key
Clean up resources
To remove the resource group and all resources associated with it, use the az group delete command, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
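az group delete --name $resourceGroup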
Sample reference
COMMAND NOTES
az functionapp config appsettings set Sets the connection string as an app setting in the function app.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
More Azure Functions CLI script samples can be found in the Azure Functions documentation.
Mount a file share to a Python function app using
Azure CLI
8/2/2022 • 4 minutes to read
This Azure Functions sample script creates a function app using the Consumption plan and creates a share in
Azure Files. It then mounts the share so that the data can be accessed by your functions.
NOTE
The function app created runs on Python version 3.9. Azure Functions also supports Python versions 3.7 and 3.8.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
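az account set -s $subscription # ...or use 'az login'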
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="eastus"
resourceGroup="msdocs-azure-functions-rg-$randomIdentifier"
tag="functions-cli-mount-files-storage-linux"
export AZURE_STORAGE_ACCOUNT="msdocsstorage$randomIdentifier"
functionApp="msdocs-serverless-function-$randomIdentifier"
skuStorage="Standard_LRS"
functionsVersion="4"
pythonVersion="3.9" #Allowed values: 3.7, 3.8, and 3.9
share="msdocs-fileshare-$randomIdentifier"
directory="msdocs-directory-$randomIdentifier"
shareId="msdocs-share-$randomIdentifier"
mountPath="/mounted-$randomIdentifier"
Clean up resources
To remove the resource group and all resources associated with it, use the az group delete command, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
az group delete --name $resourceGroup
Sample reference
Each command in the table links to command-specific documentation. This script uses the following commands:
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
az webapp config storage-account add Mounts the share to the function app.
az webapp config storage-account list Shows file shares mounted to the function app.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional Azure Functions CLI script samples can be found in the Azure Functions documentation.
Create a function app in Azure that is deployed
from GitHub
8/2/2022 • 3 minutes to read
This Azure Functions sample script creates a function app using the Consumption plan, along with its related resources. The script also configures your function code for continuous deployment from a public GitHub repository, and includes commented-out code for using a private GitHub repository.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to sign in using a different subscription, replacing <Subscription ID> with your Azure Subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here
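az account set -s $subscription # ...or use 'az login'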
# Create a function app with source files deployed from the specified GitHub repo.
echo "Creating $functionApp"
az functionapp create --name $functionApp --storage-account $storage --consumption-plan-location "$location" --resource-group $resourceGroup --deployment-source-url $gitrepo --deployment-source-branch main --functions-version $functionsVersion --runtime $runtime
Clean up resources
To remove the resource group and all resources associated with it, use the az group delete command, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.
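az group delete --name $resourceGroup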
Sample reference
Each command in the table links to command-specific documentation. This script uses the following commands:
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
az storage account create Creates the storage account required by the function app.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional Azure Functions CLI script samples can be found in the Azure Functions documentation.
Create function app resources in Azure using
PowerShell
8/2/2022 • 9 minutes to read
The Azure PowerShell example scripts in this article create function apps and other resources required to host
your functions in Azure. A function app provides an execution context in which your functions are executed. All
functions running in a function app share the same resources and connections, and they're all scaled together.
After the resources are created, you can deploy your project files to the new function app. To learn more, see
Deployment methods.
Every function app requires your PowerShell scripts to create the following resources:
A resource group, which is a logical container for related resources.
A Storage account, which maintains state and other information about your functions.
A function app, which provides the environment for executing your function code.
Prerequisites
If you choose to use Azure PowerShell locally:
Install the Az PowerShell module.
Connect to your Azure account using the Connect-AzAccount cmdlet.
If you choose to use Azure Cloud Shell:
See Overview of Azure Cloud Shell for more information.
If you don't have an Azure subscription, create an Azure free account before you begin.
# Variable block
$randomIdentifier = Get-Random
$location = "eastus"
$resourceGroup = "msdocs-azure-functions-rg-$randomIdentifier"
$tag = @{script = "create-function-app-consumption"}
$storage = "msdocsaccount$randomIdentifier"
$functionApp = "msdocs-serverless-function-$randomIdentifier"
$skuStorage = "Standard_LRS"
$functionsVersion = "4"
# Variable block
$randomIdentifier = Get-Random
$location = "eastus"
$resourceGroup = "msdocs-azure-functions-rg-$randomIdentifier"
$tag = @{script = "create-function-app-consumption-python"}
$storage = "msdocsaccount$randomIdentifier"
$functionApp = "msdocs-serverless-python-function-$randomIdentifier"
$skuStorage = "Standard_LRS"
$functionsVersion = "4"
$pythonVersion = "3.9" #Allowed values: 3.7, 3.8, and 3.9
# Variable block
$randomIdentifier = Get-Random
$location = "eastus"
$resourceGroup = "msdocs-azure-functions-rg-$randomIdentifier"
$tag = @{script = "create-function-app-premium-plan"}
$storage = "msdocsaccount$randomIdentifier"
$premiumPlan = "msdocs-premium-plan-$randomIdentifier"
$functionApp = "msdocs-function-$randomIdentifier"
$skuStorage = "Standard_LRS" # Allowed values: Standard_LRS, Standard_GRS, Standard_RAGRS, Standard_ZRS,
Premium_LRS, Premium_ZRS, Standard_GZRS, Standard_RAGZRS
$skuPlan = "EP1"
$functionsVersion = "4"
# Variable block
$randomIdentifier = Get-Random
$location = "eastus"
$resourceGroup = "msdocs-azure-functions-rg-$randomIdentifier"
$tag = @{script = "create-function-app-app-service-plan"}
$storage = "msdocsaccount$randomIdentifier"
$appServicePlan = "msdocs-app-service-plan-$randomIdentifier"
$functionApp = "msdocs-serverless-function-$randomIdentifier"
$skuStorage = "Standard_LRS"
$skuPlan = "B1"
$functionsVersion = "4"
# Variable block
$randomIdentifier = Get-Random
$location = "eastus"
$resourceGroup = "msdocs-azure-functions-rg-$randomIdentifier"
$tag = @{script = "create-function-app-connect-to-storage-account"}
$storage = "msdocsaccount$randomIdentifier"
$functionApp = "msdocs-serverless-function-$randomIdentifier"
$skuStorage = "Standard_LRS"
$functionsVersion = "4"
# Variable block
$randomIdentifier = Get-Random
$location = "eastus"
$resourceGroup = "msdocs-azure-functions-rg-$randomIdentifier"
$tag = @{script = "create-function-app-connect-to-cosmos-db"}
$storage = "msdocsaccount$randomIdentifier"
$functionApp = "msdocs-serverless-function-$randomIdentifier"
$skuStorage = "Standard_LRS"
$functionsVersion = "4"
# Create an Azure Cosmos DB database account using the same function app name.
Write-Host "Creating $functionApp"
New-AzCosmosDBAccount -Name $functionApp -ResourceGroupName $resourceGroup -Location $location
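# The $endpoint and $key values used below likely come from steps like these
# (DocumentEndpoint and PrimaryMasterKey are the relevant properties; assumed here).
$endpoint = (Get-AzCosmosDBAccount -ResourceGroupName $resourceGroup -Name $functionApp).DocumentEndpoint
$key = (Get-AzCosmosDBAccountKey -ResourceGroupName $resourceGroup -Name $functionApp -Type "Keys").PrimaryMasterKey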
# Configure function app settings to use the Azure Cosmos DB connection string.
Update-AzFunctionAppSetting -Name $functionApp -ResourceGroupName $resourceGroup -AppSetting @{CosmosDB_Endpoint = $endpoint; CosmosDB_Key = $key}
# Variable block
$randomIdentifier = Get-Random
$location = "eastus"
$resourceGroup = "msdocs-azure-functions-rg-$randomIdentifier"
$tag = @{script = "deploy-function-app-with-function-github"}
$storage = "msdocsaccount$randomIdentifier"
$functionApp = "mygithubfunc$randomIdentifier"
$skuStorage = "Standard_LRS"
$functionsVersion = "4"
$runtime = "Node"
# Public GitHub repository containing an Azure Functions code project.
$gitrepo = "https://github.com/Azure-Samples/functions-quickstart-javascript"
<# Set GitHub personal access token (PAT) to enable authenticated GitHub deployment in your subscription
when using a private repo.
$token = <Replace with a GitHub access token when using a private repo.>
$propertiesObject = @{
token = $token
}
# Configure GitHub deployment from a public GitHub repo and deploy once.
$propertiesObject = @{
repoUrl = $gitrepo
branch = 'main'
isManualIntegration = $True # $False when using a private repo
}
# Variable block
$randomIdentifier = Get-Random
$location = "eastus"
$resourceGroup = "msdocs-azure-functions-rg-$randomIdentifier"
$tag = @{script = "functions-cli-mount-files-storage-linux"}
$storage = "msdocsaccount$randomIdentifier"
$functionApp = "msdocs-serverless-function-$randomIdentifier"
$skuStorage = "Standard_LRS"
$functionsVersion = "4"
$pythonVersion = "3.9" #Allowed values: 3.7, 3.8, and 3.9
$share = "msdocs-fileshare-$randomIdentifier"
$directory = "msdocs-directory-$randomIdentifier"
$shareId = "msdocs-share-$randomIdentifier"
$mountPath = "/mounted-$randomIdentifier"
Mounted file shares are only supported on Linux. For more information, see Mount file shares.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, delete the resource group by running the following PowerShell command:
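Remove-AzResourceGroup -Name $resourceGroup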
Next steps
For more information on Azure PowerShell, see Azure PowerShell documentation.
Best practices for reliable Azure Functions
8/2/2022 • 9 minutes to read
Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure App Service application platform with capabilities to implement code triggered by events occurring in Azure, in third-party services, and in on-premises systems. Functions lets you build solutions by connecting to data sources or messaging solutions, which makes it easier to process and react to events. Functions runs in Azure data centers, which are complex environments with many integrated components. In a hosted cloud environment, it's expected that VMs can occasionally restart or move, and system upgrades will occur. Your function apps also likely depend on external APIs, Azure services, and other databases, which are all prone to periodic unreliability.
This article details some best practices for designing and deploying efficient function apps that remain healthy
and perform well in a cloud-based environment.
On your first deployment using an ARM template, don't include WEBSITE_CONTENTSHARE, which is generated for you.
You can use the following ARM template examples to help correctly configure these settings:
Consumption plan
Dedicated plan
Premium plan with VNET integration
Consumption plan with a deployment slot
Storage account configuration
When creating a function app, you must create or link to a general-purpose Azure Storage account that supports
Blob, Queue, and Table storage. Functions relies on Azure Storage for operations such as managing triggers and
logging function executions. The storage account connection string for your function app is found in the
AzureWebJobsStorage and WEBSITE_CONTENTAZUREFILECONNECTIONSTRING application settings.
Keep in mind the following considerations when creating this storage account:
To reduce latency, create the storage account in the same region as the function app.
To improve performance in production, use a separate storage account for each function app. This is
especially true with Durable Functions and Event Hub triggered functions.
For Event Hub triggered functions, don't use an account with Data Lake Storage enabled.
Handling large data sets
When running on Linux, you can add extra storage by mounting a file share. Mounting a share is a convenient
way for a function to process a large existing data set. To learn more, see Mount file shares.
Optimize deployments
When deploying a function app, it's important to keep in mind that the unit of deployment for functions in Azure
is the function app. All functions in a function app are deployed at the same time, usually from the same
deployment package.
Consider these options for a successful deployment:
Have your functions run from the deployment package. This run from package approach, enabled through an
app setting as sketched after this list, provides the following benefits:
Reduces the risk of file copy locking issues.
Can be deployed directly to a production app, which does trigger a restart.
Know that all files in the package are available to your app.
Improves the performance of ARM template deployments.
May reduce cold-start times, particularly for JavaScript functions with large npm package trees.
Consider using continuous deployment to connect deployments to your source control solution.
Continuous deployments also let you run from the deployment package.
For Premium plan hosting, consider adding a warmup trigger to reduce latency when new instances are
added. To learn more, see Azure Functions warm-up trigger.
To minimize deployment downtime and to be able to roll back deployments, consider using deployment
slots. To learn more, see Azure Functions deployment slots.
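The run from package option above is enabled with the WEBSITE_RUN_FROM_PACKAGE app setting; a minimal Azure PowerShell sketch, reusing the variable names from the earlier script samples:

# A value of "1" runs the app from the most recently deployed package;
# a URL value runs from a package at that location instead.
Update-AzFunctionAppSetting -Name $functionApp -ResourceGroupName $resourceGroup `
    -AppSetting @{ "WEBSITE_RUN_FROM_PACKAGE" = "1" }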
Consider concurrency
As demand builds on your function app as a result of incoming events, function apps running in Consumption
and Premium plans are scaled out. It's important to understand how your function app responds to load and
how the triggers can be configured to handle incoming events. For a general overview, see Event-driven scaling
in Azure Functions.
Dedicated (App Service) plans require you to configure scale-out for your function apps yourself.
Worker process count
In some cases, it's more efficient to handle the load by creating multiple processes, called language worker
processes, in the instance before scale-out. The maximum number of language worker processes allowed is
controlled by the FUNCTIONS_WORKER_PROCESS_COUNT setting. The default for this setting is 1 , which
means that multiple processes aren't used. After the maximum number of processes is reached, the function
app is scaled out to more instances to handle the load. This setting doesn't apply for C# class library functions,
which run in the host process.
When using FUNCTIONS_WORKER_PROCESS_COUNT on a Premium plan or Dedicated (App Service) plan, keep in mind
the number of cores provided by your plan. For example, the Premium plan EP2 provides two cores, so you
should start with a value of 2 and increase by two as needed, up to the maximum.
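For example, a hedged Azure PowerShell sketch that sets two worker processes, reusing the variable names from the earlier script samples:

# FUNCTIONS_WORKER_PROCESS_COUNT applies per instance; start at the core
# count of your plan (for example, 2 for EP2) and increase as needed.
Update-AzFunctionAppSetting -Name $functionApp -ResourceGroupName $resourceGroup `
    -AppSetting @{ "FUNCTIONS_WORKER_PROCESS_COUNT" = "2" }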
Trigger configuration
When planning for throughput and scaling, it's important to understand how the different types of triggers
process events. Some triggers allow you to control the batching behaviors and manage concurrency. Often
adjusting the values in these options can help each instance scale appropriately for the demands of the invoked
functions. These configuration options are applied to all triggers in a function app, and are maintained in the
host.json file for the app. See the Configuration section of the specific trigger reference for settings details.
To learn more about how Functions processes message streams, see Azure Functions reliable event processing.
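As an illustration, a host.json sketch that adjusts batching and concurrency for the Queue storage and Service Bus triggers; the values shown are assumptions to tune for your workload:

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 16,
      "newBatchThreshold": 8
    },
    "serviceBus": {
      "messageHandlerOptions": {
        "maxConcurrentCalls": 16
      }
    }
  }
}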
Plan for connections
Function apps running in Consumption plan are subject to connection limits. These limits are enforced on a per-
instance basis. Because of these limits and as a general best practice, you should optimize your outbound
connections from your function code. To learn more, see Manage connections in Azure Functions.
Language-specific considerations
For your language of choice, keep in mind the following considerations:
C#
Java
JavaScript
PowerShell
Python
Maximize availability
Cold start is a key consideration for serverless architectures. To learn more, see Cold starts. If cold start is a
concern for your scenario, you can find a deeper dive in the post Understanding serverless cold start .
Premium plan is the recommended plan for reducing cold starts while maintaining dynamic scale. You can use
the following guidance to reduce cold starts and improve availability in all three hosting plans.
PLAN               GUIDANCE
Dedicated plans    • Run on at least two instances with Azure App Service Health Check enabled.
                   • Implement autoscaling.
Consumption plan   • Review your use of Singleton patterns and the concurrency settings for bindings and triggers to avoid artificially placing limits on how your function app scales.
                   • Review the functionAppScaleLimit setting, which can limit scale-out.
                   • Check for a Daily Usage Quota (GB-Sec) limit set during development and testing. Consider removing this limit in production environments.
Monitor effectively
Azure Functions offers built-in integration with Azure Application Insights to monitor your function execution
and traces written from your code. To learn more, see Monitor Azure Functions. Azure Monitor also provides
facilities for monitoring the health of the function app itself. To learn more, see Monitoring with Azure Monitor.
You should be aware of the following considerations when using Application Insights integration to monitor
your functions:
Make sure that the AzureWebJobsDashboard application setting is removed. This setting was supported
in older versions of Functions. If it exists, removing AzureWebJobsDashboard improves the performance of your
functions.
Review the Application Insights logs. If data you expect to find is missing, consider adjusting the sampling
settings to better capture your monitoring scenario. You can use the excludedTypes setting to exclude
certain types from sampling, such as Request or Exception . To learn more, see Configure sampling.
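For example, a host.json sketch that keeps requests and exceptions out of sampling; the values shown are illustrative:

{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request;Exception"
      }
    }
  }
}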
Azure Functions also allows you to send system-generated and user-generated logs to Azure Monitor Logs.
Integration with Azure Monitor Logs is currently in preview.
Build in redundancy
Your business needs might require that your functions always be available, even during a data center outage. To
learn how to use a multi-regional approach to keep your critical functions always running, see Azure Functions
geo-disaster recovery and high-availability.
Next steps
Manage your function app
Improve the performance and reliability of Azure
Functions
This article provides guidance to improve the performance and reliability of your serverless function apps. For a
more general set of Azure Functions best practices, see Azure Functions best practices.
The following are best practices in how you build and architect your serverless solutions using Azure Functions.
NOTE
When using the Consumption plan, we recommend you always put each app in its own plan, since apps are scaled
independently anyway. For more information, see Multiple apps in the same plan.
Consider whether you want to group functions with different load profiles. For example, if you have a function
that processes many thousands of queue messages, and another that is only called occasionally but has high
memory requirements, you might want to deploy them in separate function apps so they get their own sets of
resources and they scale independently of each other.
Organize functions for configuration and deployment
Function apps have a host.json file, which is used to configure advanced behavior of function triggers and the
Azure Functions runtime. Changes to the host.json file apply to all functions within the app. If you have some
functions that need custom configurations, consider moving them into their own function app.
All functions in your local project are deployed together as a set of files to your function app in Azure. You might
need to deploy individual functions separately or use features like deployment slots for some functions and not
others. In such cases, you should deploy these functions (in separate code projects) to different function apps.
Organize functions by privilege
Connection strings and other credentials stored in application settings give all of the functions in the function
app the same set of permissions in the associated resource. Consider minimizing the number of functions with
access to specific credentials by moving functions that don't use those credentials to a separate function app.
You can always use techniques such as function chaining to pass data between functions in different function
apps.
TIP
If you plan to use the HTTP or WebHook bindings, plan to avoid port exhaustion that can be caused by improper
instantiation of HttpClient . For more information, see How to manage connections in Azure Functions.
Next steps
For more information, see the following resources:
How to manage connections in Azure Functions
Azure App Service best practices
Manage connections in Azure Functions
Functions in a function app share resources. Among those shared resources are connections: HTTP connections,
database connections, and connections to services such as Azure Storage. When many functions are running
concurrently in a Consumption plan, it's possible to run out of available connections. This article explains how to
code your functions to avoid using more connections than they need.
NOTE
Connection limits described in this article apply only when running in a Consumption plan. However, the techniques
described here may be beneficial when running on any plan.
Connection limit
The number of available connections in a Consumption plan is limited partly because a function app in this plan
runs in a sandbox environment. One of the restrictions that the sandbox imposes on your code is a limit on the
number of outbound connections, which is currently 600 active (1,200 total) connections per instance. When
you reach this limit, the functions runtime writes the following message to the logs:
Host thresholds exceeded: Connections. For more information, see the Functions service limits.
This limit is per instance. When the scale controller adds function app instances to handle more requests, each
instance has an independent connection limit. That means there's no global connection limit, and you can have
many more than 600 active connections across all active instances.
When troubleshooting, make sure that you have enabled Application Insights for your function app. Application
Insights lets you view metrics for your function apps like executions. For more information, see View telemetry
in Application Insights.
Static clients
To avoid holding more connections than necessary, reuse client instances rather than creating new ones with
each function invocation. We recommend reusing client connections for any language that you might write your
function in. For example, .NET clients like the HttpClient, DocumentClient, and Azure Storage clients can manage
connections if you use a single, static client.
Here are some guidelines to follow when you're using a service-specific client in an Azure Functions application:
Do not create a new client with every function invocation.
Do create a single, static client that every function invocation can use.
Consider creating a single, static client in a shared helper class if different functions use the same service.
A common question about HttpClient in .NET is "Should I dispose of my client?" In general, you dispose of
objects that implement IDisposable when you're done using them. But you don't dispose of a static client
because you aren't done using it when the function ends. You want the static client to live for the duration of
your application.
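A minimal C# script sketch of this pattern with HttpClient; the target URL is a placeholder:

using System.Net.Http;
using System.Threading.Tasks;

// Create a single, static HttpClient that all invocations share.
private static HttpClient httpClient = new HttpClient();

public static async Task Run(string input)
{
    // Reuses pooled connections instead of opening a new one per invocation.
    var response = await httpClient.GetAsync("https://example.com");
    // Rest of function
}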
Azure Cosmos DB clients
C#
JavaScript
CosmosClient connects to an Azure Cosmos DB instance. The Azure Cosmos DB documentation recommends
that you use a singleton Azure Cosmos DB client for the lifetime of your application. The following example
shows one pattern for doing that in a function:
#r "Microsoft.Azure.Cosmos"
using Microsoft.Azure.Cosmos;
// Rest of function
}
Also, create a file named "function.proj" for your trigger and add the following content:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Azure.Cosmos" Version="3.23.0" />
</ItemGroup>
</Project>
SqlClient connections
Your function code can use the .NET Framework Data Provider for SQL Server (SqlClient) to make connections
to a SQL relational database. This is also the underlying provider for data frameworks that rely on ADO.NET,
such as Entity Framework. Unlike HttpClient and DocumentClient connections, ADO.NET implements connection
pooling by default. But because you can still run out of connections, you should optimize connections to the
database. For more information, see SQL Server Connection Pooling (ADO.NET).
TIP
Some data frameworks, such as Entity Framework, typically get connection strings from the ConnectionStrings section
of a configuration file. In this case, you must explicitly add SQL database connection strings to the Connection strings
collection of your function app settings and in the local.settings.json file in your local project. If you're creating an instance
of SqlConnection in your function code, you should store the connection string value in Application settings with your
other connections.
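Because ADO.NET pools connections, the pattern differs from the static-client guidance above: create and dispose the SqlConnection inside the function so the pooled connection is released promptly. A sketch, where the SqlConnectionString setting name is an assumption:

using System;
using System.Data.SqlClient;

public static void Run(string input)
{
    // Connection string stored with your other app settings;
    // the setting name here is illustrative.
    string connectionString = Environment.GetEnvironmentVariable("SqlConnectionString");
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        // Disposing the connection returns it to the ADO.NET pool.
        connection.Open();
        // Rest of function
    }
}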
Next steps
For more information about why we recommend static clients, see Improper instantiation antipattern.
For more Azure Functions performance tips, see Optimize the performance and reliability of Azure Functions.
Storage considerations for Azure Functions
Azure Functions requires an Azure Storage account when you create a function app instance. The following
storage services may be used by your function app:
Azure Files: File share used to store and run your function app code in a Consumption Plan and Premium Plan. Azure Files is set up by default, but you can create an app without Azure Files under certain conditions.
Azure Queue Storage: Used by task hubs in Durable Functions and for failure and retry handling by specific Azure Functions triggers.
IMPORTANT
When using the Consumption/Premium hosting plan, your function code and binding configuration files are stored in
Azure Files in the main storage account. When you delete the main storage account, this content is deleted and cannot be
recovered.
Host ID considerations
Functions uses a host ID value as a way to uniquely identify a particular function app in stored artifacts. By
default, this ID is auto-generated from the name of the function app, truncated to the first 32 characters. This ID
is then used when storing per-app correlation and tracking information in the linked storage account. When you
have function apps with names longer than 32 characters and when the first 32 characters are identical, this
truncation can result in duplicate host ID values. When two function apps with identical host IDs use the same
storage account, you get a host ID collision because stored data can't be uniquely linked to the correct function
app.
Starting with version 3.x of the Functions runtime, host ID collision is detected and a warning is logged. In
version 4.x, an error is logged and the host is stopped, resulting in a hard failure. More details about host ID
collision can be found in this issue.
Avoiding host ID collisions
You can use the following strategies to avoid host ID collisions:
Use a separate storage account for each function app involved in the collision.
Rename one of your function apps to a value less than 32 characters in length, which changes the computed
host ID for the app and removes the collision.
Set an explicit host ID for one or more of the colliding apps. To learn more, see Host ID override.
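The override is the AzureFunctionsWebHost__hostid application setting; a hedged Azure PowerShell sketch, where the ID value is a placeholder that must be unique across apps sharing the storage account:

Update-AzFunctionAppSetting -Name $functionApp -ResourceGroupName $resourceGroup `
    -AppSetting @{ "AzureFunctionsWebHost__hostid" = "myuniquefunctionapp123" }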
IMPORTANT
Changing the storage account associated with an existing function app or changing the app's host ID can impact the
behavior of existing functions. For example, a Blob Storage trigger tracks whether it's processed individual blobs by writing
receipts under a specific host ID path in storage. When the host ID changes or you point to a new storage account,
previously processed blobs may be reprocessed.
To learn how to create app settings, see Work with application settings.
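Shares are mounted with the az webapp config storage-account add command; a sketch with placeholder values:

az webapp config storage-account add \
    --resource-group <RESOURCE_GROUP> \
    --name <FUNCTION_APP_NAME> \
    --custom-id <CUSTOM_ID> \
    --storage-type AzureFiles \
    --account-name <STORAGE_ACCOUNT_NAME> \
    --share-name <SHARE_NAME> \
    --access-key <ACCESS_KEY> \
    --mount-path </DIR_NAME>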
In this command, share-name is the name of the existing Azure Files share, and custom-id can be any string
that uniquely defines the share when mounted to the function app. Also, mount-path is the path from which the
share is accessed in your function app. mount-path must be in the format /dir-name , and it can't start with
/home .
For a complete example, see the scripts in Create a Python function app and mount an Azure Files share.
Currently, only a storage-type of AzureFiles is supported. You can mount only five shares to a given function
app. Mounting a file share may increase cold start time by at least 200-300 ms, or even more when the
storage account is in a different region.
The mounted share is available to your function code at the mount-path specified. For example, when
mount-path is /path/to/mount , you can access the target directory by file system APIs, as in the following
Python example:
import os
...
files_in_share = os.listdir("/path/to/mount")
Next steps
Learn more about Azure Functions hosting options.
Azure Functions scale and hosting
Azure Functions error handling and retries
Handling errors in Azure Functions is important to avoid lost data and missed events, and to monitor the health of
your application. It's also important to understand the retry behaviors of event-based triggers.
This article describes general strategies for error handling and the available retry strategies.
IMPORTANT
The retry policy support in the runtime for triggers other than Timer and Event Hubs is being removed after this feature
becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be
removed in October 2022.
Handling errors
Errors raised in Azure Functions can come from any of the following origins:
Use of built-in Azure Functions triggers and bindings.
Calls to APIs of underlying Azure services.
Calls to REST endpoints.
Calls to client libraries, packages, or third-party APIs.
Good error handling practices are important to avoid loss of data or missed messages. This section describes
some recommended error handling practices with links to more information.
Enable Application Insights
Azure Functions integrates with Application Insights to collect error data, performance data, and runtime logs.
You should use Application Insights to discover and better understand errors occurring in your function
executions. To learn more, see Monitor Azure Functions.
Use structured error handling
Capturing and logging errors is critical to monitoring the health of your application. The top-most level of any
function code should include a try/catch block. In the catch block, you can capture and log errors. For
information about what errors might be raised by bindings, see Binding error codes.
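A sketch of this pattern for an in-process C# queue-triggered function; the class, function, and queue names are illustrative:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    [FunctionName("ProcessOrder")]
    public static void Run([QueueTrigger("orders")] string message, ILogger log)
    {
        try
        {
            // Function logic goes here.
        }
        catch (Exception ex)
        {
            // Log the failure, then rethrow so the trigger's retry and
            // poison-message handling still applies.
            log.LogError(ex, "Failed to process message: {Message}", message);
            throw;
        }
    }
}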
Plan your retry strategy
Several Functions bindings extensions provide built-in support for retries. In addition, the runtime lets you
define retry policies for Timer and Event Hubs triggered functions. To learn more, see Retries. For triggers that
don't provide retry behaviors, you may want to implement your own retry scheme.
Design for idempotency
The occurrence of errors when processing data can be a problem for your functions, especially when processing
messages. You need to consider what happens when the error occurs and how to avoid duplicate processing. To
learn more, see Designing Azure Functions for identical input.
Retries
There are two kinds of retries available for your functions: built-in retry behaviors of individual trigger
extensions and retry policies. The following table indicates which triggers support retries and where the retry
behavior is configured. It also links to more information about errors coming from the underlying services.
Retry policies
Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer and Event
Hubs triggers that are enforced by the Functions runtime. The retry policy tells the runtime to rerun a failed
execution until either successful completion occurs or the maximum number of retries is reached.
A retry policy is evaluated when a Timer or Event Hubs triggered function raises an uncaught exception. As a
best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a
retry. Event Hubs checkpoints won't be written until the retry policy for the execution has completed. Because of
this behavior, progress on the specific partition is paused until the current batch has completed.
Retry strategies
There are two retry strategies supported by policy that you can configure:
Fixed delay
Exponential backoff
{
"disabled": false,
"bindings": [
{
....
}
],
"retry": {
"strategy": "fixedDelay",
"maxRetryCount": 4,
"delayInterval": "00:00:10"
}
}
if context.retry_context.retry_count == context.retry_context.max_retry_count:
    logging.warn(
        f"Max retries of {context.retry_context.max_retry_count} for "
        f"function {context.function_name} has been reached")
Fixed delay
Exponential backoff
@FunctionName("TimerTriggerJava1")
@FixedDelayRetry(maxRetryCount = 4, delayInterval = "00:00:10")
public void run(
@TimerTrigger(name = "timerInfo", schedule = "0 */5 * * * *") String timerInfo,
final ExecutionContext context
) {
context.getLogger().info("Java Timer trigger function executed at: " + LocalDateTime.now());
}
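For in-process C# class libraries, the same policy can be declared with a retry attribute; a sketch, with illustrative names:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TimerWithRetry
{
    [FunctionName("TimerTriggerCSharp")]
    [FixedDelayRetry(4, "00:00:10")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo, ILogger log)
    {
        // An uncaught exception here is retried up to 4 times, 10 seconds apart.
        log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
    }
}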
Next steps
Azure Functions triggers and bindings concepts
Best practices for reliable Azure Functions
Securing Azure Functions
In many ways, planning for secure development, deployment, and operation of serverless functions is much the
same as for any web-based or cloud hosted application. Azure App Service provides the hosting infrastructure
for your function apps. This article provides security strategies for running your function code, and how App
Service can help you secure your functions.
The platform components of App Service, including Azure VMs, storage, network connections, web frameworks,
management and integration features, are actively secured and hardened. App Service goes through rigorous
compliance checks on a continuous basis to make sure that:
Your app resources are secured from other customers' Azure resources.
VM instances and runtime software are regularly updated to address newly discovered vulnerabilities.
Communication of secrets (such as connection strings) between your app and other Azure resources (such as
SQL Database) stays within Azure and doesn't cross any network boundaries. Secrets are always encrypted
when stored.
All communication over the App Service connectivity features, such as hybrid connection, is encrypted.
Connections with remote management tools like Azure PowerShell, the Azure CLI, Azure SDKs, and REST APIs are all
encrypted.
24-hour threat management protects the infrastructure and platform against malware, distributed denial-of-
service (DDoS), man-in-the-middle (MITM), and other threats.
For more information on infrastructure and platform security in Azure, see Azure Trust Center.
For a set of security recommendations that follow the Azure Security Benchmark, see Azure Security Baseline for
Azure Functions.
Secure operation
This section guides you on configuring and running your function app as securely as possible.
Defender for Cloud
Defender for Cloud integrates with your function app in the portal. It provides, for free, a quick assessment of
potential configuration-related security vulnerabilities. Function apps running in a dedicated plan can also use
Defender for Cloud's enhanced security features for an additional cost. To learn more, see Protect your Azure
App Service web apps and APIs.
Log and monitor
One way to detect attacks is through activity monitoring and logging analytics. Functions integrates with
Application Insights to collect log, performance, and error data for your function app. Application Insights
automatically detects performance anomalies and includes powerful analytics tools to help you diagnose issues
and to understand how your functions are used. To learn more, see Monitor Azure Functions.
Functions also integrates with Azure Monitor Logs to enable you to consolidate function app logs with system
events for easier analysis. You can use diagnostic settings to configure streaming export of platform logs and
metrics for your functions to the destination of your choice, such as a Logs Analytics workspace. To learn more,
see Monitoring Azure Functions with Azure Monitor Logs.
For enterprise-level threat detection and response automation, stream your logs and events to a Logs Analytics
workspace. You can then connect Microsoft Sentinel to this workspace. To learn more, see What is Microsoft
Sentinel.
For more security recommendations for observability, see the Azure security baseline for Azure Functions.
Require HTTPS
By default, clients can connect to function endpoints by using either HTTP or HTTPS. You should redirect HTTP to
HTTPS because HTTPS uses the SSL/TLS protocol to provide a secure connection, which is both encrypted and
authenticated. To learn how, see Enforce HTTPS.
When you require HTTPS, you should also Require the latest TLS version. To learn how, see Enforce TLS versions.
For more information, see Secure connections (TLS).
Function access keys
Functions lets you use keys to make it harder to access your HTTP function endpoints during development.
Unless the HTTP access level on an HTTP triggered function is set to anonymous , requests must include an API
access key in the request.
While keys provide a default security mechanism, you may want to consider additional options to secure an
HTTP endpoint in production. For example, it's generally not a good practice to distribute a shared secret in public
apps. If your function is being called from a public client, you may want to consider implementing another
security mechanism. To learn more, see Secure an HTTP endpoint in production.
When you renew your function key values, you must manually redistribute the updated key values to all clients
that call your function.
Authorization scopes (function-level)
There are two access scopes for function-level keys:
Function : These keys apply only to the specific functions under which they are defined. When used as an
API key, these only allow access to that function.
Host : Keys with a host scope can be used to access all functions within the function app. When used as an
API key, these allow access to any function within the function app.
Each key is named for reference, and there is a default key (named "default") at the function and host level.
Function keys take precedence over host keys. When two keys are defined with the same name, the function key
is always used.
Master key (admin-level)
Each function app also has an admin-level host key named _master . In addition to providing host-level access
to all functions in the app, the master key also provides administrative access to the runtime REST APIs. This key
cannot be revoked. When you set an access level of admin , requests must use the master key; any other key
results in access failure.
Caution
Due to the elevated permissions in your function app granted by the master key, you should not share this key
with third parties or distribute it in native client applications. Use caution when choosing the admin access level.
System key
Specific extensions may require a system-managed key to access webhook endpoints. System keys are designed
for extension-specific function endpoints that are called by internal components. For example, the Event Grid trigger
requires that the subscription use a system key when calling the trigger endpoint. Durable Functions also uses
system keys to call Durable Task extension APIs.
The scope of system keys is determined by the extension, but it generally applies to the entire function app.
System keys can only be created by specific extensions, and you can't explicitly set their values. Like other keys,
you can generate a new value for the key from the portal or by using the key APIs.
Keys comparison
The following table compares the uses for various kinds of access keys:
ACTION                                 SCOPE               VALID KEYS
Execute a function                     Specific function   Function keys (and any host key)
Execute any function                   Function app        Host keys
Access the runtime (admin) REST APIs   Function app        Master key only
To learn more about access keys, see the HTTP trigger binding article.
Secret repositories
By default, keys are stored in a Blob storage container in the account provided by the AzureWebJobsStorage
setting. You can use the AzureWebJobsSecretStorageType setting to override this behavior and store keys in a
different location.
When using Key Vault for key storage, the app settings you need depend on the managed identity type.
Functions runtime version 3.x only supports system-assigned managed identities.
Version 4.x
Version 3.x
SETTING NAME                                    SYSTEM-ASSIGNED   USER-ASSIGNED   APP REGISTRATION
AzureWebJobsSecretStorageKeyVaultUri            ✓                 ✓               ✓
AzureWebJobsSecretStorageKeyVaultClientId       X                 ✓               ✓
AzureWebJobsSecretStorageKeyVaultClientSecret   X                 X               ✓
AzureWebJobsSecretStorageKeyVaultTenantId       X                 X               ✓
Authentication/authorization
While function keys can provide some mitigation for unwanted access, the only way to truly secure your
function endpoints is by implementing positive authentication of clients accessing your functions. You can then
make authorization decisions based on identity.
Enable App Service Authentication/Authorization
The App Service platform lets you use Azure Active Directory (AAD) and several third-party identity providers to
authenticate clients. You can use this strategy to implement custom authorization rules for your functions, and
you can work with user information from your function code. To learn more, see Authentication and
authorization in Azure App Service and Working with client identities.
Use Azure API Management (APIM) to authenticate requests
APIM provides a variety of API security options for incoming requests. To learn more, see API Management
authentication policies. With APIM in place, you can configure your function app to accept requests only from
the IP address of your APIM instance. To learn more, see IP address restrictions.
Permissions
As with any application or service, the goal is to run your function app with the lowest possible permissions.
User management permissions
Functions supports built-in Azure role-based access control (Azure RBAC). Azure roles supported by Functions
are Contributor, Owner, and Reader.
Permissions are effective at the function app level. The Contributor role is required to perform most function
app-level tasks. You also need the Contributor role along with the Monitoring Reader permission to be able to
view log data in Application Insights. Only the Owner role can delete a function app.
Organize functions by privilege
Connection strings and other credentials stored in application settings give all of the functions in the function
app the same set of permissions in the associated resource. Consider minimizing the number of functions with
access to specific credentials by moving functions that don't use those credentials to a separate function app.
You can always use techniques such as function chaining to pass data between functions in different function
apps.
Managed identities
A managed identity from Azure Active Directory (Azure AD) allows your app to easily access other Azure AD-
protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not
require you to provision or rotate any secrets. For more about managed identities in Azure AD, see Managed
identities for Azure resources.
Your application can be granted two types of identities:
A system-assigned identity is tied to your application and is deleted if your app is deleted. An app can
only have one system-assigned identity.
A user-assigned identity is a standalone Azure resource that can be assigned to your app. An app can have
multiple user-assigned identities.
Managed identities can be used in place of secrets for connections from some triggers and bindings. See
Identity-based connections.
For more information, see How to use managed identities for App Service and Azure Functions.
Restrict CORS access
Cross-origin resource sharing (CORS) is a way to allow web apps running in another domain to make requests
to your HTTP trigger endpoints. App Service provides built-in support for handling the required CORS headers in
HTTP requests. CORS rules are defined on a function app level.
While it's tempting to use a wildcard that allows all sites to access your endpoint, doing so defeats the purpose of
CORS, which is to help prevent cross-site scripting attacks. Instead, add a separate CORS entry for the domain of
each web app that must access your endpoint.
Managing secrets
To be able to connect to the various services and resources needed to run your code, function apps need to be able
to access secrets, such as connection strings and service keys. This section describes how to store secrets
required by your functions.
Never store secrets in your function code.
Application settings
By default, you store connection strings and secrets used by your function app and bindings as application
settings. This makes these credentials available to both your function code and the various bindings used by the
function. The application setting (key) name is used to retrieve the actual value, which is the secret.
For example, every function app requires an associated storage account, which is used by the runtime. By
default, the connection to this storage account is stored in an application setting named AzureWebJobsStorage .
App settings and connection strings are stored encrypted in Azure. They're decrypted only before being injected
into your app's process memory when the app starts. The encryption keys are rotated regularly. If you prefer to
manage the secure storage of your secrets yourself, the app settings should instead be references to Azure Key
Vault.
You can also encrypt settings by default in the local.settings.json file when developing functions on your local
computer. To learn more, see the IsEncrypted property in the local settings file.
Key Vault references
While application settings are sufficient for most functions, you may want to share the same secrets
across multiple services. In this case, redundant storage of secrets results in more potential vulnerabilities. A
more secure approach is to use a central secret storage service and use references to this service instead of the
secrets themselves.
Azure Key Vault is a service that provides centralized secrets management, with full control over access policies
and audit history. You can use a Key Vault reference in the place of a connection string or key in your application
settings. To learn more, see Use Key Vault references for App Service and Azure Functions.
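A Key Vault reference is an app setting value with a special format; for example, with placeholder vault and secret names:

@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)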
Identity-based connections
Identities may be used in place of secrets for connecting to some resources. This has the advantage of not
requiring the management of a secret, and it provides more fine-grained access control and auditing.
When you are writing code that creates the connection to Azure services that support Azure AD authentication,
you can choose to use an identity instead of a secret or connection string. Details for both connection methods
are covered in the documentation for each service.
Some Azure Functions trigger and binding extensions may be configured using an identity-based connection.
Today, this includes the Azure Blob and Azure Queue extensions. For information about how to configure these
extensions to use an identity, see How to use identity-based connections in Azure Functions.
Set usage quotas
Consider setting a usage quota on functions running in a Consumption plan. When you set a daily GB-sec limit
on the sum total execution of functions in your function app, execution is stopped when the limit is reached. This
could potentially help mitigate the impact of malicious code executing your functions. To learn how to estimate
consumption for your functions, see Estimating Consumption plan costs.
Data validation
The triggers and bindings used by your functions don't provide any additional data validation. Your code must
validate any data received from a trigger or input binding. If an upstream service is compromised, you don't
want unvalidated inputs flowing through your functions. For example, if your function stores data from an Azure
Storage queue in a relational database, you must validate the data and parameterize your commands to avoid
SQL injection attacks.
Don't assume that the data coming into your function has already been validated or sanitized. It's also a good
idea to verify that the data being written to output bindings is valid.
Handle errors
While it seems basic, it's important to write good error handling in your functions. Unhandled errors bubble up
to the host and are handled by the runtime. Different bindings handle processing of errors differently. To learn
more, see Azure Functions error handling.
Disable remote debugging
Make sure that remote debugging is disabled, except when you are actively debugging your functions. You can
disable remote debugging in the General Settings tab of your function app Configuration in the portal.
Restrict CORS access
Azure Functions supports cross-origin resource sharing (CORS). CORS is configured in the portal and through
the Azure CLI. The CORS allowed origins list applies at the function app level. With CORS enabled, responses
include the Access-Control-Allow-Origin header. For more information, see Cross-origin resource sharing.
Don't use wildcards in your allowed origins list. Instead, list the specific domains from which you expect to get
requests.
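For example, a sketch with the Azure CLI that allows a single origin; the domain and resource names are placeholders:

az functionapp cors add --allowed-origins https://contoso.com \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>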
Store data encrypted
Azure Storage encrypts all data in a storage account at rest. For more information, see Azure Storage encryption
for data at rest.
By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can
supply customer-managed keys to use for encryption of blob and file data. These keys must be present in Azure
Key Vault for Functions to be able to access the storage account. To learn more, see Encryption at rest using
customer-managed keys.
Secure deployment
Azure Functions tooling and integration make it easy to publish local function project code to Azure. It's
important to understand how deployment works when considering security for an Azure Functions topology.
Deployment credentials
App Service deployments require a set of deployment credentials. These deployment credentials are used to
secure your function app deployments. Deployment credentials are managed by the App Service platform and
are encrypted at rest.
There are two kinds of deployment credentials:
User-level credentials : one set of credentials for the entire Azure account. It can be used to deploy to
App Service for any app, in any subscription, that the Azure account has permission to access. It's the
default set that's surfaced in the portal GUI (such as the Overview and Properties of the app's resource
page). When a user is granted app access via Role-Based Access Control (RBAC) or coadmin permissions,
that user can use their own user-level credentials until the access is revoked. Do not share these
credentials with other Azure users.
App-level credentials : one set of credentials for each app. It can be used to deploy to that app only. The
credentials for each app are generated automatically at app creation. They can't be configured manually,
but can be reset at any time. For a user to be granted access to app-level credentials via RBAC, that user
must be contributor or higher on the app (including Website Contributor built-in role). Readers are not
allowed to publish, and can't access those credentials.
At this time, Key Vault isn't supported for deployment credentials. To learn more about managing deployment
credentials, see Configure deployment credentials for Azure App Service.
Disable FTP
By default, each function app has an FTP endpoint enabled. The FTP endpoint is accessed using deployment
credentials.
FTP isn't recommended for deploying your function code. FTP deployments are manual, and they require you to
synchronize triggers. To learn more, see FTP deployment.
When you're not planning on using FTP, you should disable it in the portal. If you do choose to use FTP, you
should enforce FTPS.
Secure the scm endpoint
Every function app has a corresponding scm service endpoint that's used by the Advanced Tools (Kudu) service
for deployments and other App Service site extensions. The scm endpoint for a function app is always a URL in
the form https://<FUNCTION_APP_NAME>.scm.azurewebsites.net . When you use network isolation to secure your
functions, you must also account for this endpoint.
By having a separate scm endpoint, you can control deployments and other advanced tools functionality for
function apps that are isolated or running in a virtual network. The scm endpoint supports both basic
authentication (using deployment credentials) and single sign-on with your Azure portal credentials. To learn
more, see Accessing the Kudu service.
Continuous security validation
Since security needs to be considered at every step in the development process, it makes sense to also implement
security validations in a continuous deployment environment. This is sometimes called DevSecOps. Using Azure
DevOps for your deployment pipeline lets you integrate validation into the deployment process. For more
information, see Learn how to add continuous security validation to your CI/CD pipeline.
Network security
Restricting network access to your function app lets you control who can access your functions endpoints.
Functions leverages App Service infrastructure to enable your functions to access resources without using
internet-routable addresses or to restrict internet access to a function endpoint. To learn more about these
networking options, see Azure Functions networking options.
Set access restrictions
Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated
in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more,
see Azure App Service Access Restrictions.
Private site access
Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by
Azure Private Link. Private Endpoint uses a private IP address from your virtual network, effectively bringing the
service into your virtual network.
You can use Private Endpoint for your functions hosted in the Premium and App Service plans.
If you want to make calls to Private Endpoints, then you must make sure that your DNS lookups resolve to the
private endpoint. You can enforce this behavior in one of the following ways:
Integrate with Azure DNS private zones. When your virtual network doesn't have a custom DNS server, this is
done automatically.
Manage the private endpoint in the DNS server used by your app. To do this you must know the private
endpoint address and then point the endpoint you are trying to reach to that address using an A record.
Configure your own DNS server to forward to Azure DNS private zones.
To learn more, see using Private Endpoints for Web Apps.
Deploy your function app in isolation
Azure App Service Environment (ASE) provides a dedicated hosting environment in which to run your functions.
ASE lets you configure a single front-end gateway that you can use to authenticate all incoming requests. For
more information, see Configuring a Web Application Firewall (WAF) for App Service Environment.
Use a gateway service
Gateway services, such as Azure Application Gateway and Azure Front Door let you set up a Web Application
Firewall (WAF). WAF rules are used to monitor or block detected attacks, providing an extra layer of
protection for your functions. To set up a WAF, your function app needs to be running in an ASE or using Private
Endpoints (preview). To learn more, see Using Private Endpoints.
Next steps
Azure Security Baseline for Azure Functions
Azure Functions diagnostics
Azure Functions runtime versions overview
Azure Functions currently supports several versions of the runtime host. The following table details the available
versions, their support level, and when they should be used:
IMPORTANT
Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime will
no longer be supported. Before that time, test, verify, and migrate your function apps to version 4.x of the Functions
runtime. For more information, see Migrating from 3.x to 4.x.
End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these
older runtime versions. This requirement affects all Azure Functions runtime languages.
Functions version 1.x is still supported for C# function apps that require the .NET Framework. Preview support is now
available in Functions 4.x to run C# functions on .NET Framework 4.8.
This article details some of the differences between these versions, how you can create each version, and how to
change the version on which your functions run.
Levels of support
There are two levels of support:
Generally available (GA) - Fully supported and approved for production use.
Preview - Not yet supported, but expected to reach GA status in the future.
Languages
All functions in a function app must share the same language. You choose the language of functions in your
function app when you create the app. The language of your function app is maintained in the
FUNCTIONS_WORKER_RUNTIME setting, and shouldn't be changed when there are existing functions.
The following table indicates which programming languages are currently supported in each runtime version.
LANGUAGE      1.X                       2.X                      3.X                                 4.X
C#            GA (.NET Framework 4.8)   GA (.NET Core 2.1¹)      GA (.NET Core 3.1), GA (.NET 5.0)   GA (.NET 6.0), Preview (.NET 7), Preview (.NET Framework 4.8)
F#            GA (.NET Framework 4.8)   GA (.NET Core 2.1¹)      GA (.NET Core 3.1)                  GA (.NET 6.0)
Python        N/A                       GA (Python 3.7 & 3.6)    GA (Python 3.9, 3.8, 3.7, & 3.6)    GA (Python 3.9, 3.8, 3.7)
TypeScript²   N/A                       GA                       GA                                  GA
TypeScript2 N/A GA GA GA
1 .NET class library apps targeting runtime version 2.x run on .NET Core 3.1 in .NET Core 2.x compatibility
mode. To learn more, see Functions v2.x considerations.
2 Supported through transpiling to JavaScript.
See the language-specific developer guide article for more details about supported language versions.
For information about planned changes to language support, see Azure roadmap.
The FUNCTIONS_EXTENSION_VERSION app setting determines the runtime version targeted by your function app:

VALUE   RUNTIME TARGET
~4      4.x
~3      3.x
~2      2.x
~1      1.x
IMPORTANT
Don't arbitrarily change this app setting, because other app setting changes and changes to your function code may be
required. You should instead change this setting in the Function runtime settings tab of the function app
Configuration in the Azure portal when you are ready to make a major version upgrade.
There's technically not a correlation between extension bundle versions and the Functions runtime version.
However, starting with version 4.x the Functions runtime enforces a minimum version for extension bundles.
If you receive a warning about your extension bundle version not meeting a minimum required version, update
your existing extension bundle reference in the host.json as follows:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
}
}
Azure CLI
Azure PowerShell
When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
Azure CLI
Azure PowerShell
In these examples, replace <APP_NAME> with the name of your function app and <RESOURCE_GROUP_NAME> with the
name of the resource group.
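The general shape of these commands in the Azure CLI is sketched below; the placeholders are those defined above:

# Pin the app to the version 4.x runtime.
az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>

# Windows only: enable .NET 6, which version 4.x of the runtime requires.
az functionapp config set --net-framework-version v6.0 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>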
Migrate using slots
Using deployment slots is a good way to migrate your function app to the v4.x runtime from a previous version.
By using a staging slot, you can run your app on the new runtime version in the staging slot and switch to
production after verification. Slots also provide a way to minimize downtime during upgrade. If you need to
minimize downtime, follow the steps in Minimum downtime upgrade.
After you've verified your app in the upgraded slot, you can swap the app and new version settings into
production. This swap requires setting WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 in the production slot.
How you add this setting affects the amount of downtime required for the upgrade.
Standard upgrade
If your slot-enabled function app can handle the downtime of a full restart, you can update the
WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS setting directly in the production slot. Because changing this setting
directly in the production slot causes a restart that impacts availability, consider doing this change at a time of
reduced traffic. You can then swap in the upgraded version from the staging slot.
The Update-AzFunctionAppSetting PowerShell cmdlet doesn't currently support slots. You must use Azure CLI or
the Azure portal.
1. Use the following command to set WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 in the production slot:
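A sketch of this step with the Azure CLI:

az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>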
This command causes the app running in the production slot to restart.
2. Use the following command to also set WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS in the staging slot:
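A sketch (note the --slot argument):

az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --slot staging

3. Upgrade the staging slot to the new runtime version; a sketch, assuming a move to version 4.x:

az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --slot staging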
4. (Windows only) For function apps running on Windows, use the following command so that the runtime
can run on .NET 6:
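A sketch:

az functionapp config set --net-framework-version v6.0 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --slot staging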
Version 4.x of the Functions runtime requires .NET 6 when running on Windows.
5. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot
now.
6. Confirm that your function app runs correctly in the upgraded staging environment before swapping.
7. Use the following command to swap the upgraded staging slot to production:
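A sketch of the swap with the Azure CLI:

az functionapp deployment slot swap --slot staging \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>

Minimum downtime upgrade
If your app can't tolerate the restart caused by changing the production setting directly, you can instead keep the override setting only in the staging slot; the numbered steps that follow assume this starting point.
1. Use the following command to set WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 in the staging slot (a sketch):

az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --slot staging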
2. Use the following commands to swap the slot with the new setting into production, and at the same time
restore the version setting in the staging slot.
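A sketch of the pair; the version value to restore in staging depends on the version production was running:

az functionapp deployment slot swap --slot staging \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>
az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=<PREVIOUS_VERSION> \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --slot staging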
You may see errors from the staging slot during the time between the swap and the runtime version
being restored on staging. This can happen because having
WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 only in staging during a swap removes the
FUNCTIONS_EXTENSION_VERSION setting in staging. Without the version setting, your slot is in a bad state.
Updating the version in the staging slot right after the swap should put the slot back into a good state,
and you can roll back your changes if needed. However, any rollback of the swap also requires you to
directly remove WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 from production before the swap back to
prevent the same errors in production seen in staging. This change in the production setting would then
cause a restart.
3. Use the following command to again set WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 in the staging
slot:
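A sketch:

az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --slot staging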
5. (Windows only) For function apps running on Windows, use the following command so that the runtime
can run on .NET 6:
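A sketch:

az functionapp config set --net-framework-version v6.0 \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --slot staging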
Version 4.x of the Functions runtime requires .NET 6 when running on Windows.
6. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot
now.
7. Confirm that your function app runs correctly in the upgraded staging environment before swapping.
8. Use the following command to swap the upgraded and prewarmed staging slot to production:
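A sketch:

az functionapp deployment slot swap --slot staging \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>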
<TargetFramework>net6.0</TargetFramework>
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
3. Update the NuGet packages referenced by your app to the latest versions. For more information, see
breaking changes.
Specific packages depend on whether your functions run in-process or out-of-process.
In-process
Isolated process
NOTE
Due to support issues with .NET Core 2.2, function apps pinned to version 2 ( ~2 ) are essentially running on .NET Core
3.1. To learn more, see Functions v2.x compatibility mode.
Output bindings assigned through 1.x context.done or return values now behave the same as setting in
2.x+ context.bindings .
Timer trigger object is camelCase instead of PascalCase
Event hub triggered functions with dataType binary will receive an array of binary instead of string .
The HTTP request payload can no longer be accessed via context.bindingData.req . It can still be accessed
as an input parameter, context.req , and in context.bindings .
Node.js 8 is no longer supported and won't execute in 3.x functions.
Version 4.x
Version 3.x
Version 2.x
Version 1.x
<TargetFramework>net6.0</TargetFramework>
<AzureFunctionsVersion>v4</AzureFunctionsVersion>
You can also choose net6.0 , net7.0 , or net48 as the target framework if you are using .NET isolated process
functions. Support for net7.0 and net48 is currently in preview.
NOTE
Azure Functions 4.x requires that the Microsoft.NET.Sdk.Functions extension be at least version 4.0.0 .
Updating 2.x apps to 3.x in Visual Studio
You can open an existing function targeting 2.x and move to 3.x by editing the .csproj file and updating the
values above. Visual Studio manages runtime versions automatically for you based on project metadata.
However, it's possible if you've never created a 3.x app before that Visual Studio doesn't yet have the templates
and runtime for 3.x on your machine. This issue may present itself with an error like "no Functions runtime
available that matches the version specified in the project." To fetch the latest templates and runtime, go through
the experience to create a new function project. When you get to the version and template select screen, wait for
Visual Studio to complete fetching the latest templates. After the latest .NET Core 3 templates are available and
displayed, you can run and debug any project configured for version 3.x.
IMPORTANT
Version 3.x functions can only be developed in Visual Studio if using Visual Studio version 16.4 or newer.
Bindings
Starting with version 2.x, the runtime uses a new binding extensibility model that offers these advantages:
Support for third-party binding extensions.
Decoupling of runtime and bindings. This change allows binding extensions to be versioned and released
independently. You can, for example, opt to upgrade to a version of an extension that relies on a newer
version of an underlying SDK.
A lighter execution environment, where only the bindings in use are known and loaded by the runtime.
Except for HTTP and timer triggers, all bindings must be explicitly added to the function app project, or
registered in the portal. For more information, see Register binding extensions.
The following table shows the bindings that are supported in the major versions of the Azure Functions runtime:

TYPE                  1.X   2.X AND HIGHER¹   TRIGGER   INPUT   OUTPUT
Blob storage          ✔     ✔                 ✔         ✔       ✔
Azure Cosmos DB       ✔     ✔                 ✔         ✔       ✔
Azure SQL (preview)         ✔                           ✔       ✔
Dapr³                       ✔                 ✔         ✔       ✔
Event Grid            ✔     ✔                 ✔                 ✔
Event Hubs            ✔     ✔                 ✔                 ✔
HTTP & webhooks       ✔     ✔                 ✔                 ✔
IoT Hub               ✔     ✔                 ✔
Kafka²                      ✔                 ✔                 ✔
Mobile Apps           ✔                                 ✔       ✔
Notification Hubs     ✔                                         ✔
Queue storage         ✔     ✔                 ✔                 ✔
RabbitMQ²                   ✔                 ✔                 ✔
SendGrid              ✔     ✔                                   ✔
Service Bus           ✔     ✔                 ✔                 ✔
SignalR                     ✔                 ✔         ✔       ✔
Table storage         ✔     ✔                           ✔       ✔
Timer                 ✔     ✔                 ✔
Twilio                ✔     ✔                                   ✔
1 Starting with the version 2.x runtime, all bindings except HTTP and Timer must be registered. See Register
binding extensions.
2 Triggers aren't supported in the Consumption plan. Requires runtime-driven triggers.
3 Supported only in Kubernetes, IoT Edge, and other self-hosted modes.
Function app timeout duration (in minutes):

PLAN               DEFAULT   MAXIMUM
Consumption plan   5         10
1 Regardless of the function app timeout setting, 230 seconds is the maximum amount of time that an HTTP
triggered function can take to respond to a request. This is because of the default idle timeout of Azure Load
Balancer. For longer processing times, consider using the Durable Functions async pattern or defer the actual
work and return an immediate response.
2 The default timeout for version 1.x of the Functions runtime is unlimited.
Next steps
For more information, see the following resources:
Code and test Azure Functions locally
How to target Azure Functions runtime versions
Release notes
Azure Functions Consumption plan hosting
When you're using the Consumption plan, instances of the Azure Functions host are dynamically added and
removed based on the number of incoming events. The Consumption plan is the fully serverless hosting option
for Azure Functions.
Benefits
The Consumption plan scales automatically, even during periods of high load. When running functions in a
Consumption plan, you're charged for compute resources only when your functions are running. On a
Consumption plan, a function execution times out after a configurable period of time.
For a comparison of the Consumption plan against the other plan and hosting types, see function scale and
hosting options.
Billing
Billing is based on number of executions, execution time, and memory used. Usage is aggregated across all
functions within a function app. For more information, see the Azure Functions pricing page.
To learn more about how to estimate costs when running in a Consumption plan, see Understanding
Consumption plan costs.
Next steps
Azure Functions hosting options
Event-driven scaling in Azure Functions
Azure Functions Premium plan
8/2/2022 • 11 minutes to read • Edit Online
The Azure Functions Elastic Premium plan is a dynamic scale hosting option for function apps. For other hosting
plan options, see the hosting plan article.
IMPORTANT
Azure Functions runs on the Azure App Service platform. In the App Service platform, plans that host Premium plan
function apps are referred to as Elastic Premium plans, with SKU names like EP1 . If you choose to run your function app
on a Premium plan, make sure to create a plan with an SKU name that starts with "E", such as EP1 . App Service plan
SKU names that start with "P", such as P1V2 (Premium V2 Small plan), are actually Dedicated hosting plans. Because
they are Dedicated and not Elastic Premium, plans with SKU names starting with "P" won't scale dynamically and may
increase your costs.
Billing
Billing for the Premium plan is based on the number of core seconds and memory allocated across instances.
This billing differs from the Consumption plan, which is billed per execution and memory consumed. There is no
execution charge with the Premium plan. At least one instance must be allocated at all times per plan. This
billing results in a minimum monthly cost per active plan, regardless if the function is active or idle. Keep in
mind that all function apps in a Premium plan share allocated instances. To learn more, see the Azure Functions
pricing page.
NOTE
Every premium plan has at least one active (billed) instance at all times.
You can configure the number of always ready instances in the Azure portal by selecting the Scale Out options under Settings of a function app deployed to that plan and then adjusting the Always Ready Instances count. Always ready instances are specific to each function app in the plan.
Migration
If you have an existing function app, you can use Azure CLI commands to migrate your app between a
Consumption plan and a Premium plan on Windows. The specific commands depend on the direction of the
migration. To learn more, see Plan migration.
This migration isn't supported on Linux.
Plan and SKU settings
You can configure the plan size and maximums in the Azure portal by selecting the Scale Out options under
Settings of a function app deployed to that plan.
Every plan has a minimum of at least one instance. The actual minimum number of instances is autoconfigured for you based on the always ready instances requested by apps in the plan. For example, if app A requests five always ready instances and app B requests two always ready instances in the same plan, the minimum plan size is calculated as five. App A runs on all five instances, and app B runs only on two.
IMPORTANT
You are charged for each instance allocated in the minimum instance count, regardless of whether functions are executing.
In most circumstances, this autocalculated minimum is sufficient. However, scaling beyond the minimum occurs on a best-effort basis. It's possible, though unlikely, that at a specific time scale-out could be delayed if additional
instances are unavailable. By setting a minimum higher than the autocalculated minimum, you reserve instances
in advance of scale-out.
You can configure the minimum instances in the Azure portal by selecting the Scale Out options under
Settings of a function app deployed to that plan.
Available instance SKUs
When creating or scaling your plan, you can choose between three instance sizes. You will be billed for the total number of cores and memory provisioned, per second that each instance is allocated to you. Your app can automatically scale out to multiple instances as needed.
SKU | CORES | MEMORY | STORAGE
EP1 | 1 | 3.5 GB | 250 GB
EP2 | 2 | 7 GB | 250 GB
EP3 | 4 | 14 GB | 250 GB
For plans with more than 4 GB of memory, ensure the Bitness Platform Setting is set to 64 Bit under General Settings.
The following lists the maximum scale-out values for a single plan in each region and OS configuration:
REGION | WINDOWS | LINUX
Central US | 100 | 40
East US | 100 | 60
East US 2 | 100 | 40
UK South | 100 | 20
UK West | 100 | 20
West US | 100 | 20
West US 2 | 100 | 20
West US 3 | 100 | 20
Next steps
Understand Azure Functions hosting options
Dedicated hosting plans for Azure Functions
8/2/2022 • 2 minutes to read • Edit Online
This article is about hosting your function app in an App Service plan, including in an App Service Environment
(ASE). For other hosting options, see the hosting plan article.
An App Service plan defines a set of compute resources for an app to run. These compute resources are
analogous to the server farm in conventional hosting. One or more function apps can be configured to run on
the same computing resources (App Service plan) as other App Service apps, such as web apps. These plans
include Basic, Standard, Premium, and Isolated SKUs. For details about how the App Service plan works, see the
Azure App Service plans in-depth overview.
Consider an App Service plan in the following situations:
You have existing, underutilized VMs that are already running other App Service instances.
You want to provide a custom image on which to run your functions.
Billing
You pay for function apps in an App Service Plan as you would for other App Service resources. This differs from
Azure Functions Consumption plan or Premium plan hosting, which have consumption-based cost components.
You are billed only for the plan, regardless of how many function apps or web apps run in the plan. To learn
more, see the App Service pricing page.
Always On
If you run on an App Service plan, you should enable the Always on setting so that your function app runs
correctly. On an App Service plan, the functions runtime goes idle after a few minutes of inactivity, so only HTTP
triggers will "wake up" your functions. The Always on setting is available only on an App Service plan. On a
Consumption plan, the platform activates function apps automatically.
Even with Always On enabled, the execution timeout for individual functions is controlled by the
functionTimeout setting in the host.json project file.
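For example, a host.json file that caps every function execution at ten minutes (the value is an illustrative hh:mm:ss timespan) looks like this:
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}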
Scaling
Using an App Service plan, you can manually scale out by adding more VM instances. You can also enable
autoscale, though autoscale will be slower than the elastic scale of the Premium plan. For more information, see
Scale instance count manually or automatically. You can also scale up by choosing a different App Service plan.
For more information, see Scale up an app in Azure.
NOTE
When running JavaScript (Node.js) functions on an App Service plan, you should choose a plan that has fewer vCPUs. For
more information, see Choose single-core App Service plans.
Next steps
Azure Functions hosting options
Azure App Service plan overview
Deployment technologies in Azure Functions
8/2/2022 • 10 minutes to read • Edit Online
You can use a few different technologies to deploy your Azure Functions project code to Azure. This article
provides an overview of the deployment methods available to you and recommendations for the best method
to use in various scenarios. It also provides an exhaustive list of the underlying deployment technologies, along with key details about each.
Deployment methods
The deployment technology you use to publish code to Azure is generally determined by the way in which you
publish your app. The appropriate deployment method is determined by specific needs and the point in the
development cycle. For example, during development and testing you may deploy directly from your
development tool, such as Visual Studio Code. When your app is in production, you are more likely to publish
continuously from source control or by using an automated publishing pipeline, which includes additional
validation and testing.
The available deployment methods fall into three broad categories: tools-based publishing (from Visual Studio Code, Visual Studio, or Core Tools), deployments managed by App Service, and external pipelines (such as Azure Pipelines or GitHub Actions).
While specific Functions deployments use the best technology based on their context, most deployment
methods are based on zip deployment.
DEPLOYMENT TECHNOLOGY | WINDOWS CONSUMPTION | WINDOWS PREMIUM | WINDOWS DEDICATED | LINUX CONSUMPTION | LINUX PREMIUM | LINUX DEDICATED
External package URL1 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
Zip deploy | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
Docker container | | | | | ✔ | ✔
Web Deploy | ✔ | ✔ | ✔ | | |
Source control | ✔ | ✔ | ✔ | | ✔ | ✔
Local Git1 | ✔ | ✔ | ✔ | | ✔ | ✔
Cloud sync1 | ✔ | ✔ | ✔ | | ✔ | ✔
FTP1 | ✔ | ✔ | ✔ | | ✔ | ✔
Portal editing | ✔ | ✔ | ✔ | | ✔2 | ✔2
1 Deployment technology that requires manual trigger syncing.
2 Portal editing is enabled only for HTTP and Timer triggers for Functions on Linux using Premium and Dedicated plans.
Key concepts
Some key concepts are critical to understanding how deployments work in Azure Functions.
Trigger syncing
When you change any of your triggers, the Functions infrastructure must be aware of the changes.
Synchronization happens automatically for many deployment technologies. However, in some cases, you must
manually sync your triggers. When you deploy your updates by referencing an external package URL, local Git,
cloud sync, or FTP, you must manually sync your triggers. You can sync triggers in one of three ways:
Restart your function app in the Azure portal.
Send an HTTP POST request to
https://{functionappname}.azurewebsites.net/admin/host/synctriggers?code=<API_KEY> using the master key.
Send an HTTP POST request to
https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01
Replace the placeholders with your subscription ID, resource group name, and the name of your function app.
When you deploy using an external package URL and the contents of the package change but the URL itself
doesn't change, you need to manually restart your function app to fully sync your updates.
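As an illustration, a minimal C# sketch that calls the admin sync-triggers endpoint described above (the app name and master key placeholders are hypothetical and must be supplied by you):
using System;
using System.Net.Http;
using System.Threading.Tasks;
class SyncTriggers
{
    static async Task Main()
    {
        // Placeholders: replace with your function app name and master key.
        var url = "https://<APP_NAME>.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>";
        using var client = new HttpClient();
        // The endpoint expects an empty POST body.
        var response = await client.PostAsync(url, new StringContent(string.Empty));
        Console.WriteLine(response.StatusCode);
    }
}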
Remote build
Azure Functions can automatically perform builds on the code it receives after zip deployments. These builds
behave slightly differently depending on whether your app is running on Windows or Linux. Remote builds are
not performed when an app has previously been set to run in Run From Package mode. To learn how to use
remote build, navigate to zip deploy.
NOTE
If you're having issues with remote build, it might be because your app was created before the feature was made available
(August 1, 2019). Try creating a new function app, or running
az functionapp update -g <RESOURCE_GROUP_NAME> -n <APP_NAME> to update your function app. This command might
take two tries to succeed.
By default, both Azure Functions Core Tools and the Azure Functions Extension for Visual Studio Code perform
remote builds when deploying to Linux. Because of this, both tools automatically create these settings for you in
Azure.
When apps are built remotely on Linux, they run from the deployment package.
Consumption plan
Linux function apps running in the Consumption plan don't have an SCM/Kudu site, which limits the
deployment options. However, function apps on Linux running in the Consumption plan do support remote
builds.
Dedicated and Premium plans
Function apps running on Linux in the Dedicated (App Service) plan and the Premium plan also have a limited
SCM/Kudu site.
External package URL
When to use it: External package URL is the only supported deployment method for Azure Functions
running on Linux in the Consumption plan, if the user doesn't want a remote build to occur. When you
update the package file that a function app references, you must manually sync triggers to tell Azure that
your application has changed. When you change the contents of the package file and not the URL itself, you
must also restart your function app manually.
Zip deploy
Use zip deploy to push a .zip file that contains your function app to Azure. Optionally, you can set your app to
start running from package, or specify that a remote build occurs.
How to use it: Deploy by using your favorite client tool: Visual Studio Code, Visual Studio, or from the
command line using the Azure Functions Core Tools. By default, these tools use zip deployment and run
from package. Core Tools and the Visual Studio Code extension both enable remote build when deploying to
Linux. To manually deploy a .zip file to your function app, follow the instructions in Deploy from a .zip file or
URL.
When you deploy by using zip deploy, you can set your app to run from package. To run from package, set
the WEBSITE_RUN_FROM_PACKAGE application setting value to 1 . We recommend zip deployment. It yields
faster loading times for your applications, and it's the default for VS Code, Visual Studio, and the Azure CLI.
When to use it: Zip deploy is the recommended deployment technology for Azure Functions.
Docker container
You can deploy a Linux container image that contains your function app.
How to use it: Create a Linux function app in the Premium or Dedicated plan and specify which container
image to run from. You can do this in two ways:
Create a Linux function app on an Azure App Service plan in the Azure portal. For Publish , select
Docker Image , and then configure the container. Enter the location where the image is hosted.
Create a Linux function app on an App Service plan by using the Azure CLI. To learn how, see Create a
function on Linux by using a custom image.
To deploy to a Kubernetes cluster as a custom container, in Azure Functions Core Tools, use the
func kubernetes deploy command.
When to use it: Use the Docker container option when you need more control over the Linux environment
where your function app runs. This deployment mechanism is available only for Functions running on Linux.
Web Deploy (MSDeploy)
How to use it: Use Visual Studio tools for Azure Functions. Clear the Run from package file
(recommended) check box.
You can also download Web Deploy 3.6 and call MSDeploy.exe directly.
When to use it: Web Deploy is supported and has no issues, but the preferred mechanism is zip deploy
with Run From Package enabled. To learn more, see the Visual Studio development guide.
Source control
Use source control to connect your function app to a Git repository. An update to code in that repository
triggers deployment. For more information, see the Kudu Wiki.
How to use it: Use Deployment Center in the Functions area of the portal to set up publishing from source
control. For more information, see Continuous deployment for Azure Functions.
When to use it: Using source control is the best practice for teams that collaborate on their function apps.
Source control is a good deployment option that enables more sophisticated deployment pipelines.
Local Git
You can use local Git to push code from your local machine to Azure Functions.
How to use it: Follow the instructions in Local Git deployment to Azure App Service.
When to use it: In general, we recommend that you use a different deployment method. When you publish
from local Git, you must manually sync triggers.
Cloud sync
Use cloud sync to sync your content from Dropbox and OneDrive to Azure Functions.
How to use it: Follow the instructions in Sync content from a cloud folder.
When to use it: In general, we recommend other deployment methods. When you publish by using cloud
sync, you must manually sync triggers.
FTP
You can use FTP to directly transfer files to Azure Functions.
How to use it: Follow the instructions in Deploy content by using FTP/s.
When to use it: In general, we recommend other deployment methods. When you publish by using FTP,
you must manually sync triggers.
Portal editing
In the portal-based editor, you can directly edit the files that are in your function app (essentially deploying
every time you save your changes).
How to use it: To be able to edit your functions in the Azure portal, you must have created your functions
in the portal. To preserve a single source of truth, using any other deployment method makes your function
read-only and prevents continued portal editing. To return to a state in which you can edit your files in the
Azure portal, you can manually turn the edit mode back to Read/Write and remove any deployment-related
application settings (like WEBSITE_RUN_FROM_PACKAGE ).
When to use it: The portal is a good way to get started with Azure Functions. For more intense
development work, we recommend that you use one of the following client tools:
Visual Studio Code
Azure Functions Core Tools (command line)
Visual Studio
The following table shows the operating systems and languages that support portal editing:
LANGUAGE | WINDOWS CONSUMPTION | WINDOWS PREMIUM | WINDOWS DEDICATED | LINUX CONSUMPTION | LINUX PREMIUM | LINUX DEDICATED
C# | | | | | |
C# Script | ✔ | ✔ | ✔ | | ✔* | ✔*
F# | | | | | |
Java | | | | | |
JavaScript (Node.js) | ✔ | ✔ | ✔ | | ✔* | ✔*
Python | | | | | |
PowerShell | ✔ | ✔ | ✔ | | |
TypeScript (Node.js) | | | | | |
* Portal editing is enabled only for HTTP and Timer triggers for Functions on Linux using Premium and
Dedicated plans.
Deployment behaviors
When you deploy updates to your function app code, currently executing functions are terminated. After
deployment completes, the new code is loaded to begin processing requests. Review Improve the performance
and reliability of Azure Functions to learn how to write stateless and defensive functions.
If you need more control over this transition, you should use deployment slots.
Deployment slots
When you deploy your function app to Azure, you can deploy to a separate deployment slot instead of directly
to production. For more information, see the Azure Functions deployment slots documentation.
Next steps
Read these articles to learn more about deploying your function apps:
Continuous deployment for Azure Functions
Continuous delivery by using Azure DevOps
Zip deployments for Azure Functions
Run your Azure Functions from a package file
Automate resource deployment for your function app in Azure Functions
Connect to eventing and messaging services from
Azure Functions
8/2/2022 • 2 minutes to read • Edit Online
As a cloud computing service, Azure Functions is frequently used to move data between various Azure services.
To make it easier for you to connect your code to other services, Functions implements a set of binding
extensions to connect to these services. To learn more, see Azure Functions triggers and bindings concepts.
By definition, Azure Functions executions are stateless. If you need to connect your code to services in a more
stateful way, consider instead using Durable Functions or Azure Logic Apps.
Triggers and bindings are provided to make consuming and emitting data easier. There may be cases where you need
more control over the service connection, or you just feel more comfortable using a client library provided by a
service SDK. In those cases, you can use a client instance from the SDK in your function execution to access the
service as you normally would. When using a client directly, you need to pay attention to the effect of scale and
performance on client connections. To learn more, see the guidance on using static clients.
You can't obtain the client instance used by a service binding from your function execution.
The rest of this article provides specific guidance for integrating your code with the specific Azure services
supported by Functions.
Event Grid
Event Grid is an Azure service that sends HTTP requests to notify you about events that happen in publishers. A
publisher is the service or resource that originates the event. For example, an Azure blob storage account is a
publisher, and a blob upload or deletion is an event. Some Azure services have built-in support for publishing
events to Event Grid.
Event handlers receive and process events. Azure Functions is one of several Azure services that have built-in
support for handling Event Grid events. Functions provides an Event Grid trigger, which invokes a function when
an event is received from Event Grid. A similar output binding can be used to send events from your function to
an Event Grid custom topic.
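As a minimal sketch (assuming the in-process model with the Microsoft.Azure.WebJobs.Extensions.EventGrid extension), an Event Grid triggered function looks like this:
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
public static class EventGridExample
{
    // Invoked whenever an event is delivered from an Event Grid subscription.
    [FunctionName("EventGridExample")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        log.LogInformation("Event type: {type}, subject: {subject}",
            eventGridEvent.EventType, eventGridEvent.Subject);
    }
}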
You can also use an HTTP trigger to handle Event Grid Events. To learn more, see Receive events to an HTTP
endpoint. We recommend using the Event Grid trigger over the HTTP trigger.
Azure Functions provides built-in integration with Azure Event Grid by using triggers and bindings.
To learn how to configure and locally evaluate your Event Grid trigger and bindings, see How to work with Event
Grid triggers and bindings in Azure Functions.
For more information about Event Grid trigger and output binding definitions and examples, see one of the
following reference articles:
Azure Event Grid bindings for Azure Functions
Azure Event Grid trigger for Azure Functions
Azure Event Grid output binding for Azure Functions
Next steps
To learn more about Event Grid with Functions, see the following articles:
Azure Event Grid bindings for Azure Functions
Tutorial: Automate resizing uploaded images using Event Grid
Event-driven scaling in Azure Functions
8/2/2022 • 7 minutes to read • Edit Online
In the Consumption and Premium plans, Azure Functions scales CPU and memory resources by adding
additional instances of the Functions host. The number of instances is determined by the number of events that
trigger a function.
Each instance of the Functions host in the Consumption plan is limited to 1.5 GB of memory and one CPU. An
instance of the host is the entire function app, meaning all functions within a function app share resources within
an instance and scale at the same time. Function apps that share the same Consumption plan scale
independently. In the Premium plan, the plan size determines the available memory and CPU for all apps in that
plan on that instance.
Function code files are stored on Azure Files shares on the function's main storage account. When you delete the
main storage account of the function app, the function code files are deleted and cannot be recovered.
Runtime scaling
Azure Functions uses a component called the scale controller to monitor the rate of events and determine
whether to scale out or scale in. The scale controller uses heuristics for each trigger type. For example, when
you're using an Azure Queue storage trigger, it scales based on the queue length and the age of the oldest queue
message.
The unit of scale for Azure Functions is the function app. When the function app is scaled out, additional
resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute demand is
reduced, the scale controller removes function host instances. The number of instances is eventually "scaled in"
to zero when no functions are running within a function app.
Cold start
After your function app has been idle for a number of minutes, the platform may scale the number of instances
on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This
latency is referred to as a cold start. The number of dependencies required by your function app can impact the
cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must
return a response. If cold starts are impacting your functions, consider running in a Premium plan or in a
Dedicated plan with the Always on setting enabled.
Scale-in behaviors
Event-driven scaling automatically reduces capacity when demand for your functions is reduced. It does this by
shutting down worker instances of your function app. Before an instance is shut down, new events stop being
sent to the instance. Also, functions that are currently executing are given time to finish executing. This behavior
is logged as drain mode. This shut-down period can extend up to 10 minutes for Consumption plan apps and up
to 60 minutes for Premium plan apps. Event-driven scaling and this behavior don't apply to Dedicated plan apps.
The following considerations apply for scale-in behaviors:
For Consumption plan function apps running on Windows, only apps created after May 2021 have drain
mode behaviors enabled by default.
To enable graceful shutdown for functions using the Service Bus trigger, use version 4.2.0 or a later version of
the Service Bus Extension.
Event Hubs trigger
This section describes how scaling behaves when your function uses an Event Hubs trigger or an IoT Hub
trigger. In these cases, each instance of an event triggered function is backed by a single EventProcessorHost
instance. The trigger (powered by Event Hubs) ensures that only one EventProcessorHost instance can get a
lease on a given partition.
For example, consider an Event Hub as follows:
10 partitions
1,000 events distributed evenly across all partitions, with 100 messages in each partition
When your function is first enabled, there is only one instance of the function. Let's call the first function instance
Function_0 . The Function_0 function has a single instance of EventProcessorHost that holds a lease on all ten
partitions. This instance is reading events from partitions 0-9. From this point forward, one of the following
happens:
New function instances are not needed : Function_0 is able to process all 1,000 events before the
Functions scaling logic takes effect. In this case, all 1,000 messages are processed by Function_0 .
An additional function instance is added : If the Functions scaling logic determines that Function_0
has more messages than it can process, a new function app instance ( Function_1 ) is created. This new
function also has an associated instance of EventProcessorHost. As the underlying Event Hubs service detects that
a new host instance is trying to read messages, it load balances the partitions across the host instances. For
example, partitions 0-4 may be assigned to Function_0 and partitions 5-9 to Function_1 .
N more function instances are added : If the Functions scaling logic determines that both Function_0
and Function_1 have more messages than they can process, new Functions_N function app instances
are created. Apps are created to the point where N is greater than the number of event hub partitions. In
our example, Event Hubs again load balances the partitions, in this case across the instances Function_0 ...
Function_9 .
As scaling occurs, N can be greater than the number of event hub partitions. This pattern is used
to ensure EventProcessorHost instances are available to obtain locks on partitions as they become available
from other instances. You are only charged for the resources used when the function instance executes. In other
words, you are not charged for this over-provisioning.
When all function execution completes (with or without errors), checkpoints are added to the associated storage
account. When check-pointing succeeds, the 1,000 messages are never retrieved again.
Billing model
Billing for the different plans is described in detail on the Azure Functions pricing page. Usage is aggregated at
the function app level and counts only the time that function code is executed. The following are the units for billing (a worked example follows the list):
Resource consumption in gigabyte-seconds (GB-s) . Computed as a combination of memory size and
execution time for all functions within a function app.
Executions . Counted each time a function is executed in response to an event trigger.
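As a rough illustration, consider a function that uses 0.5 GB of memory and runs for one second per invocation, one million times a month. The sketch below estimates the monthly bill; the rates in it are assumptions for illustration only, so check the Azure Functions pricing page for current values.
// Illustrative cost estimate; the rates below are assumptions, not current prices.
double memoryGb = 0.5;            // observed memory per execution, in GB
double durationSeconds = 1.0;     // execution time per invocation
double executions = 1_000_000;    // monthly invocations
double gbSeconds = memoryGb * durationSeconds * executions;   // 500,000 GB-s
double resourceCost = gbSeconds * 0.000016;                   // assumed rate per GB-s
double executionCost = (executions / 1_000_000) * 0.20;       // assumed rate per million executions
System.Console.WriteLine($"GB-s: {gbSeconds}, estimated monthly cost: {resourceCost + executionCost}");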
Useful queries and information on how to understand your consumption bill can be found on the billing FAQ.
Next steps
Azure Functions hosting options
Azure Functions reliable event processing
8/2/2022 • 6 minutes to read • Edit Online
Event processing is one of the most common scenarios associated with serverless architecture. This article
describes how to create a reliable message processor with Azure Functions to avoid losing messages.
Handling exceptions
As a general rule, every function should include a try/catch block at the highest level of code. Specifically, all
functions that consume Event Hubs events should have a catch block. That way, when an exception is raised,
the catch block handles the error before the pointer progresses.
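For example, a minimal sketch of this pattern for an Event Hubs triggered C# function (the hub name and connection setting name are hypothetical; the Microsoft.Azure.EventHubs-based extension is assumed):
using System;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
public static class ReliableProcessor
{
    [FunctionName("ReliableProcessor")]
    public static void Run(
        [EventHubTrigger("my-hub", Connection = "EventHubConnection")] EventData[] events,
        ILogger log)
    {
        foreach (var e in events)
        {
            try
            {
                // Process the event here.
            }
            catch (Exception ex)
            {
                // Handle the failure so the offset pointer can still advance.
                log.LogError(ex, "Failed to process event");
            }
        }
    }
}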
Retry mechanisms and policies
Some exceptions are transient in nature and don't reappear when an operation is attempted again moments
later. This is why the first step is always to retry the operation. You can use the function app retry policies or
author retry logic within the function execution.
Introducing fault-handling behaviors to your functions allows you to define both basic and advanced retry
policies. For instance, you could implement a policy that follows a workflow illustrated by the following rules:
Try to insert a message three times (potentially with a delay between retries).
If the eventual outcome of all retries is a failure, then add a message to a queue so processing can continue
on the stream.
Corrupt or unprocessed messages are then handled later.
NOTE
Polly is an example of a resilience and transient-fault-handling library for C# applications.
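As a hedged sketch of the retry-then-queue workflow above using Polly (the insertMessage and sendToErrorQueue delegates are hypothetical placeholders for your own logic):
using System;
using System.Threading.Tasks;
using Polly;
public static class RetryExample
{
    public static async Task ProcessAsync(Func<Task> insertMessage, Func<Task> sendToErrorQueue)
    {
        // Retry up to three times with an exponential delay between attempts.
        var retryPolicy = Policy
            .Handle<Exception>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        try
        {
            await retryPolicy.ExecuteAsync(insertMessage);
        }
        catch (Exception)
        {
            // All retries failed: park the message so processing of the stream can continue.
            await sendToErrorQueue();
        }
    }
}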
Non-exception errors
Some issues arise even when an error is not present. For example, consider a failure that occurs in the middle of
an execution. In this case, if a function doesn’t complete execution, the offset pointer is never progressed. If the
pointer doesn't advance, then any instance that runs after a failed execution continues to read the same
messages. This situation provides an "at-least-once" guarantee.
The assurance that every message is processed at least one time implies that some messages may be processed
more than once. Your function apps need to be aware of this possibility and must be built around the principles
of idempotency.
Resources
Reliable event processing samples
Azure Durable Entity Circuit Breaker
Next steps
For more information, see the following resources:
Azure Functions error handling
Automate resizing uploaded images using Event Grid
Create a function that integrates with Azure Logic Apps
Concurrency in Azure Functions
8/2/2022 • 7 minutes to read • Edit Online
This article describes the concurrency behaviors of event-driven triggers in Azure Functions. It also describes a
new dynamic model for optimizing concurrency behaviors.
The hosting model for Functions allows multiple function invocations to run concurrently on a single compute
instance. For example, consider a case where you have three different functions in your function app, which is
scaled out and running on multiple instances. In this scenario, each function processes invocations on each VM
instance on which your function app is running. The function invocations on a single instance share the same
VM compute resources, such as memory, CPU, and connections. When your app is hosted in a dynamic plan
(Consumption or Premium), the platform scales the number of function app instances up or down based on the
number of incoming events. To learn more, see Event-driven scaling. When you host your functions in a
Dedicated (App Service) plan, you manually configure your instances or set up an autoscale scheme.
Because multiple function invocations can run on each instance concurrently, each function needs to have a way
to throttle how many concurrent invocations it's processing at any given time.
Static concurrency
Many of the triggers support a host-level static configuration model, which is used to specify per-instance
concurrency for that trigger type. For example, the Service Bus trigger provides both a MaxConcurrentCalls and
a MaxConcurrentSessions setting in the host.json file. These settings together control the maximum number of
messages each function processes concurrently on each instance. Other trigger types have built-in mechanisms
for load-balancing invocations across instances. For example, Event Hubs and Azure Cosmos DB both use a
partition-based scheme.
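For example, with version 5.x of the Service Bus extension, the MaxConcurrentCalls and MaxConcurrentSessions settings mentioned above appear at the top level of the serviceBus section in host.json (older extension versions nest them under messageHandlerOptions; the values here are illustrative):
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxConcurrentCalls": 16,
      "maxConcurrentSessions": 8
    }
  }
}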
For trigger types that support concurrency configuration, there's a default behavior, which you can choose to
override in the host.json file for your function app. These settings, which apply to all running instances, allow
you to control the maximum concurrency for your functions on each instance. For example, when your function
is CPU or resource-intensive, you may choose to limit concurrency to keep instances healthy. Similarly, when
your function is making requests to a downstream service that is being throttled, you should also consider
limiting concurrency.
While such concurrency configurations give you control of certain trigger behaviors such as throttling your
functions, it can be difficult to determine the optimal values for these settings. Generally, you have to arrive at
acceptable values via a trial and error process of load testing. Even when you determine a set of values that are
working for a particular load profile, the number of events arriving from your connected services may change
from day to day. This variability means your app often may run with suboptimal values. For example, your
function app may process particularly demanding message payloads on the last day of the week, which requires
you to throttle concurrency down. However, during the rest of the week the message payloads are simpler,
which means you could use a higher concurrency level the rest of the week.
Ideally, we want the system to allow instances to process as much work as they can while keeping each instance
healthy and latencies low, which is what dynamic concurrency is designed to do.
Dynamic concurrency
Functions now provides a dynamic concurrency model that simplifies configuring concurrency for all function
apps running in the same plan.
NOTE
Dynamic concurrency is currently only supported for the Azure Blob, Azure Queue, and Service Bus triggers and requires
you to use the versions listed in the extension support section below.
Benefits
Using dynamic concurrency provides the following benefits:
Simplified configuration : You no longer have to manually determine per-trigger concurrency settings. The
system learns the optimal values for your workload over time.
Dynamic adjustments : Concurrency is adjusted up or down dynamically in real time, which allows the
system to adapt to changing load patterns over time.
Instance health protection : The runtime limits concurrency to levels a function app instance can
comfortably handle. This protects the app from overloading itself by taking on more work than it should.
Improved throughput : Overall throughput is improved because individual instances aren't pulling more
work than they can quickly process. This allows work to be load-balanced more effectively across instances.
For functions that can handle higher loads, concurrency can be increased to values beyond the default config
values, which yields higher throughput.
Dynamic concurrency configuration
Dynamic concurrency can be enabled at the host level in the host.json file. When enabled, any binding
extensions used by your function app that support dynamic concurrency adjust concurrency dynamically as
needed. Dynamic concurrency settings override any manually configured concurrency settings for triggers that
support dynamic concurrency.
By default, dynamic concurrency is disabled. With dynamic concurrency enabled, concurrency starts at 1 for
each function, and is adjusted up to an optimal value, which is determined by the host.
You can enable dynamic concurrency in your function app by adding the following settings in your host.json file:
{
"version": "2.0",
"concurrency": {
"dynamicConcurrencyEnabled": true,
"snapshotPersistenceEnabled": true
}
}
When SnapshotPersistenceEnabled is true , which is the default, the learned concurrency values are periodically
persisted to storage so new instances start from those values instead of starting from 1 and having to redo the
learning.
Concurrency manager
Behind the scenes, when dynamic concurrency is enabled there's a concurrency manager process running in the
background. This manager constantly monitors instance health metrics, like CPU and thread utilization, and
changes throttles as needed. When one or more throttles are enabled, function concurrency is adjusted down
until the host is healthy again. When throttles are disabled, concurrency is allowed to increase. Various heuristics
are used to intelligently adjust concurrency up or down as needed based on these throttles. Over time,
concurrency for each function stabilizes to a particular level.
Concurrency levels are managed for each individual function. As such, the system balances between resource-
intensive functions that require a low level of concurrency and more lightweight functions that can handle
higher concurrency. The balance of concurrency for each function helps to maintain overall health of the
function app instance.
When dynamic concurrency is enabled, you'll see dynamic concurrency decisions in your logs. For example,
you'll see logs when various throttles are enabled, and whenever concurrency is adjusted up or down for each
function. These logs are written under the Host.Concurrency log category in the traces table.
Extension support
Dynamic concurrency is enabled for a function app at the host level, and any extensions that support dynamic
concurrency run in that mode. Dynamic concurrency requires collaboration between the host and individual
trigger extensions. Only the listed versions of the following extensions support dynamic concurrency.
Azure Queues
The Azure Queue storage trigger has its own message polling loop. When using static config, concurrency is
governed by the BatchSize / NewBatchThreshold config options. When using dynamic concurrency, those
configuration values are ignored. Dynamic concurrency is integrated into the message loop, so the number of
messages fetched per iteration are dynamically adjusted. When throttles are enabled (host is overloaded),
message processing will be paused until throttles are disabled. When throttles are disabled, concurrency will
increase.
To use dynamic concurrency for Queues, you must use version 5.x of the storage extension.
Azure Blobs
Internally, the Azure Blob storage trigger uses the same infrastructure that the Azure Queue Trigger uses. When
new/updated blobs need to be processed, messages are written to a platform managed control queue, and that
queue is processed using the same logic used for QueueTrigger. When dynamic concurrency is enabled,
concurrency for the processing of that control queue will be dynamically managed.
To use dynamic concurrency for Blobs, you must use version 5.x of the storage extension.
Service Bus
The Service Bus trigger currently supports three execution models. Dynamic concurrency affects these execution
models as follows:
Single dispatch topic/queue processing : Each invocation of your function processes a single message.
When using static config, concurrency is governed by the MaxConcurrentCalls config option. When using
dynamic concurrency, that config value is ignored, and concurrency is adjusted dynamically.
Session based single dispatch topic/queue processing : Each invocation of your function processes a
single message. Depending on the number of active sessions for your topic/queue, each instance leases one
or more sessions. Messages in each session are processed serially, to guarantee ordering in a session. When
not using dynamic concurrency, concurrency is governed by the MaxConcurrentSessions setting. With
dynamic concurrency enabled, MaxConcurrentSessions is ignored and the number of sessions each instance is
processing is dynamically adjusted.
Batch processing : Each invocation of your function processes a batch of messages, governed by the
MaxMessageCount setting. Because batch invocations are serial, concurrency for your batch-triggered function
is always one and dynamic concurrency doesn't apply.
To enable your Service Bus trigger to use dynamic concurrency, you must use version 5.x of the Service Bus
extension.
Next steps
For more information, see the following resources:
Best practices for Azure Functions
Azure Functions developer reference
Azure Functions triggers and bindings
Designing Azure Functions for identical input
8/2/2022 • 2 minutes to read • Edit Online
The reality of event-driven and message-based architecture dictates the need to accept identical requests while
preserving data integrity and system stability.
To illustrate, consider an elevator call button. As you press the button, it lights up and an elevator is sent to your
floor. A few moments later, someone else joins you in the lobby. This person smiles at you and presses the
illuminated button a second time. You smile back and chuckle to yourself as you're reminded that the command
to call an elevator is idempotent.
Pressing an elevator call button a second, third, or fourth time has no bearing on the final result. When you
press the button, regardless of the number of times, the elevator is sent to your floor. Idempotent systems, like
the elevator, result in the same outcome no matter how many times identical commands are issued.
When it comes to building applications, consider the following scenarios:
What happens if your inventory control application tries to delete the same product more than once?
How does your human resource application behave if there is more than one request to create an employee
record for the same person?
Where does the money go if your banking app gets 100 requests to make the same withdrawal?
There are many contexts where requests to a function may receive identical commands. Some situations include:
Retry policies sending the same request many times.
Cached commands replayed to the application.
Application errors sending multiple identical requests.
To protect data integrity and system health, an idempotent application contains logic that may include the
following behaviors:
Verifying the existence of data before trying to execute a delete.
Checking to see if data already exists before trying to execute a create action.
Reconciling logic that creates eventual consistency in data.
Concurrency controls.
Duplication detection.
Data freshness validation.
Guard logic to verify input data.
Ultimately, idempotency is achieved by ensuring a given action is possible and is executed only once; the sketch below shows one such guard.
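As one hedged illustration, a create action can be guarded with a conflict check by using a unique request ID as a table key (the Azure.Data.Tables client library is assumed; the table and entity shape are hypothetical):
using System.Threading.Tasks;
using Azure;
using Azure.Data.Tables;
public static class IdempotentCreate
{
    public static async Task CreateOnceAsync(TableClient table, string requestId)
    {
        var entity = new TableEntity("requests", requestId);
        try
        {
            // AddEntityAsync fails with 409 Conflict if the row already exists,
            // so a duplicate request becomes a no-op instead of a second create.
            await table.AddEntityAsync(entity);
        }
        catch (RequestFailedException ex) when (ex.Status == 409)
        {
            // Identical request was already processed; ignore.
        }
    }
}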
Next steps
Azure Functions reliable event processing
Concurrency in Azure Functions
Azure Functions error handling and retries
Azure Functions triggers and bindings concepts
8/2/2022 • 6 minutes to read • Edit Online
In this article, you learn the high-level concepts surrounding function triggers and bindings.
Triggers cause a function to run. A trigger defines how a function is invoked and a function must have exactly
one trigger. Triggers have associated data, which is often provided as the payload of the function.
Binding to a function is a way of declaratively connecting another resource to the function; bindings may be
connected as input bindings, output bindings, or both. Data from bindings is provided to the function as
parameters.
You can mix and match different bindings to suit your needs. Bindings are optional and a function might have
one or multiple input and/or output bindings.
Triggers and bindings let you avoid hardcoding access to other services. Your function receives data (for
example, the content of a queue message) in function parameters. You send data (for example, to create a queue
message) by using the return value of the function.
Consider the following examples of how you could implement different functions.
For example, an Event Grid trigger can be used with Blob storage and Azure Cosmos DB input bindings to read an image from Blob storage and a document from Azure Cosmos DB, and with a SendGrid output binding to send an email.
These examples aren't meant to be exhaustive, but are provided to illustrate how you can use triggers and
bindings together.
Trigger and binding definitions
Triggers and bindings are defined differently depending on the development language.
For languages that rely on function.json, the portal provides a UI for adding bindings in the Integration tab.
You can also edit the file directly in the portal in the Code + test tab of your function. Visual Studio Code lets
you easily add a binding to a function.json file by following a convenient set of prompts.
In .NET and Java, the parameter type defines the data type for input data. For instance, use string to bind to the
text of a queue trigger, a byte array to read as binary, and a custom type to de-serialize to an object. Since .NET
class library functions and Java functions don't rely on function.json for binding definitions, they can't be created
and edited in the portal. C# portal editing is based on C# script, which uses function.json instead of attributes.
To learn more about how to add bindings to existing functions, see Connect functions to Azure services using
bindings.
For languages that are dynamically typed such as JavaScript, use the dataType property in the function.json file.
For example, to read the content of an HTTP request in binary format, set dataType to binary :
{
"dataType": "binary",
"type": "httpTrigger",
"name": "req",
"direction": "in"
}
Binding direction
All triggers and bindings have a direction property in the function.json file:
For triggers, the direction is always in
Input and output bindings use in and out
Some bindings support a special direction inout . If you use inout , only the Advanced editor is available
via the Integrate tab in the portal.
When you use attributes in a class library to configure triggers and bindings, the direction is provided in an
attribute constructor or inferred from the parameter type.
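For example, in a C# class library the out modifier on a parameter implies an output direction, so no explicit direction is needed (the queue names here are hypothetical):
using Microsoft.Azure.WebJobs;
public static class DirectionExample
{
    [FunctionName("DirectionExample")]
    public static void Run(
        [QueueTrigger("incoming-items")] string input,   // trigger: direction is always "in"
        [Queue("outgoing-items")] out string output)     // out parameter: direction inferred as "out"
    {
        output = input;
    }
}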
Supported bindings
This table shows the bindings that are supported in the major versions of the Azure Functions runtime:
TYPE | 1.X | 2.X AND HIGHER1 | TRIGGER | INPUT | OUTPUT
Blob storage | ✔ | ✔ | ✔ | ✔ | ✔
Azure Cosmos DB | ✔ | ✔ | ✔ | ✔ | ✔
Azure SQL (preview) | | ✔ | | ✔ | ✔
Dapr3 | | ✔ | ✔ | ✔ | ✔
Event Grid | ✔ | ✔ | ✔ | | ✔
Event Hubs | ✔ | ✔ | ✔ | | ✔
HTTP & webhooks | ✔ | ✔ | ✔ | | ✔
IoT Hub | ✔ | ✔ | ✔ | |
Kafka2 | | ✔ | ✔ | | ✔
Mobile Apps | ✔ | | | ✔ | ✔
Notification Hubs | ✔ | | | | ✔
Queue storage | ✔ | ✔ | ✔ | | ✔
RabbitMQ2 | | ✔ | ✔ | | ✔
SendGrid | ✔ | ✔ | | | ✔
Service Bus | ✔ | ✔ | ✔ | | ✔
SignalR | | ✔ | ✔ | ✔ | ✔
Table storage | ✔ | ✔ | | ✔ | ✔
Timer | ✔ | ✔ | ✔ | |
Twilio | ✔ | ✔ | | | ✔
1 Starting with the version 2.x runtime, all bindings except HTTP and Timer must be registered. See Register binding extensions.
2 Triggers aren't supported in the Consumption plan. Requires runtime-driven triggers.
3 Supported only in Kubernetes, IoT Edge, and other self-hosted modes.
For information about which bindings are in preview or are approved for production use, see Supported
languages.
Bindings code examples
Use the following table to find examples of specific binding types that show you how to work with bindings in your functions:
SERVICE | EXAMPLES
RabbitMQ | Trigger, Output
SendGrid | Output
SignalR | Trigger, Input, Output
Custom bindings
You can create custom input and output bindings. Bindings must be authored in .NET, but can be consumed
from any supported language. For more information about creating custom bindings, see Creating custom input
and output bindings.
Resources
Binding expressions and patterns
Using the Azure Function return value
How to register a binding expression
Testing:
Strategies for testing your code in Azure Functions
Manually run a non HTTP-triggered function
Handling binding errors
Next steps
Register Azure Functions binding extensions
Azure Functions trigger and binding example
8/2/2022 • 3 minutes to read • Edit Online
This article demonstrates how to configure a trigger and bindings in an Azure Function.
Suppose you want to write a new row to Azure Table storage whenever a new message appears in Azure Queue
storage. This scenario can be implemented using an Azure Queue storage trigger and an Azure Table storage
output binding.
Here's a function.json file for this scenario.
{
"bindings": [
{
"type": "queueTrigger",
"direction": "in",
"name": "order",
"queueName": "myqueue-items",
"connection": "MY_STORAGE_ACCT_APP_SETTING"
},
{
"type": "table",
"direction": "out",
"name": "$return",
"tableName": "outTable",
"connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING"
}
]
}
The first element in the bindings array is the Queue storage trigger. The type and direction properties
identify the trigger. The name property identifies the function parameter that receives the queue message
content. The name of the queue to monitor is in queueName , and the connection string is in the app setting
identified by connection .
The second element in the bindings array is the Azure Table Storage output binding. The type and direction
properties identify the binding. The name property specifies how the function provides the new table row, in
this case by using the function return value. The name of the table is in tableName , and the connection string is
in the app setting identified by connection .
To view and edit the contents of function.json in the Azure portal, click the Advanced editor option on the
Integrate tab of your function.
NOTE
The value of connection is the name of an app setting that contains the connection string, not the connection string
itself. Bindings use connection strings stored in app settings to enforce the best practice that function.json does not
contain service secrets.
Here's C# script code that works with this trigger and binding. Notice that the name of the parameter that
provides the queue message content is order ; this name is required because the name property value in
function.json is order .
#r "Newtonsoft.Json"
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;
// From an incoming queue message that is a JSON object, add fields and write to Table storage
// The method return value creates a new row in Table storage
public class Person
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Name { get; set; }
    public string MobileNumber { get; set; }
}
public static Person Run(JObject order, ILogger log)
{
    return new Person() {
        PartitionKey = "Orders",
        RowKey = Guid.NewGuid().ToString(),
        Name = order["Name"].ToString(),
        MobileNumber = order["MobileNumber"].ToString() };
}
You now have a working function that is triggered by an Azure Queue and outputs data to Azure Table storage.
Next steps
Azure Functions binding expression patterns
Register Azure Functions binding extensions
8/2/2022 • 3 minutes to read • Edit Online
Starting with Azure Functions version 2.x, the functions runtime only includes HTTP and timer triggers by
default. Other triggers and bindings are available as separate packages.
.NET class library functions apps use bindings that are installed in the project as NuGet packages. Extension
bundles allow non-.NET functions apps to use the same bindings without having to deal with the .NET
infrastructure.
The following table indicates when and how you register bindings.
DEVELOPMENT ENVIRONMENT | TO REGISTER IN FUNCTIONS 1.X | TO REGISTER IN FUNCTIONS 2.X AND HIGHER
C# class library using Visual Studio | Use NuGet tools | Use NuGet tools
C# class library using Visual Studio Code | N/A | Use .NET Core CLI
Extension bundles
By default, extension bundles are used by Java, JavaScript, PowerShell, Python, C# script, and Custom Handler
function apps to work with binding extensions. In cases where extension bundles can't be used, you can explicitly
install binding extensions with your function app project. Extension bundles are supported for version 2.x and
later version of the Functions runtime.
Extension bundles are a way to add a pre-defined set of compatible binding extensions to your function
app. Extension bundles are versioned. Each version contains a specific set of binding extensions that are verified
to work together. Select a bundle version based on the extensions that you need in your app.
When you create a non-.NET Functions project from tooling or in the portal, extension bundles are already
enabled in the app's host.json file.
An extension bundle reference is defined by the extensionBundle section in a host.json as follows:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
NOTE
Version 3.x of the extension bundle currently does not include the Table Storage bindings. If your app requires Table
Storage, you will need to continue using the 2.x version for now.
Each version of the default Microsoft.Azure.Functions.ExtensionBundle bundle includes a specific set of extensions; choose a bundle version based on the extensions your app needs.
NOTE
While you can specify a custom version range in host.json, we recommend that you use a supported bundle version value.
Next steps
Azure Function trigger and binding example
Azure Functions binding expression patterns
8/2/2022 • 6 minutes to read • Edit Online
One of the most powerful features of triggers and bindings is binding expressions. In the function.json file and
in function parameters and code, you can use expressions that resolve to values from various sources.
Most expressions are identified by wrapping them in curly braces. For example, in a queue trigger function,
{queueTrigger} resolves to the queue message text. If the path property for a blob output binding is
container/{queueTrigger} and the function is triggered by a queue message HelloWorld , a blob named
HelloWorld is created.
NOTE
The connection property of triggers and bindings is a special case and automatically resolves values as app settings,
without percent signs.
The following example is an Azure Queue Storage trigger that uses an app setting %input_queue_name% to define
the queue to trigger on.
{
"bindings": [
{
"name": "order",
"type": "queueTrigger",
"direction": "in",
"queueName": "%input_queue_name%",
"connection": "MY_STORAGE_ACCT_APP_SETTING"
}
]
}
[FunctionName("QueueTrigger")]
public static void Run(
[QueueTrigger("%input_queue_name%")]string myQueueItem,
ILogger log)
{
log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
}
A blob trigger path can include a binding expression, such as {filename}, that captures part of the name of the triggering blob:
{
"bindings": [
{
"name": "image",
"type": "blobTrigger",
"path": "sample-images/{filename}",
"direction": "in",
"connection": "MyStorageConnection"
},
...
The expression {filename} can then be used in an output binding to specify the name of the blob being created:
...
{
"name": "imageSmall",
"type": "blob",
"path": "sample-images-sm/{filename}",
"direction": "out",
"connection": "MyStorageConnection"
}
],
}
Function code has access to this same value by using filename as a parameter name:
// C# example of binding to {filename}
public static void Run(Stream image, string filename, Stream imageSmall, ILogger log)
{
log.LogInformation($"Blob trigger processing: {filename}");
// ...
}
The same ability to use binding expressions and patterns applies to attributes in class libraries. In the following
example, the attribute constructor parameters are the same path values as the preceding function.json
examples:
[FunctionName("ResizeImage")]
public static void Run(
[BlobTrigger("sample-images/{filename}")] Stream image,
[Blob("sample-images-sm/{filename}", FileAccess.Write)] Stream imageSmall,
string filename,
ILogger log)
{
log.LogInformation($"Blob trigger processing: {filename}");
// ...
}
You can also create expressions for parts of the file name. In the following example, the function is triggered
only on file names that match the pattern anyname-anyfile.csv, such as 20220102-soundfile.csv:
{
"name": "myBlob",
"type": "blobTrigger",
"direction": "in",
"path": "testContainerName/{date}-{filetype}.csv",
"connection": "OrderStorageConnection"
}
For more information on how to use expressions and patterns in the Blob path string, see the Storage blob
binding reference.
Trigger metadata
In addition to the data payload provided by a trigger (such as the content of the queue message that triggered a
function), many triggers provide additional metadata values. These values can be used as input parameters in
C# and F# or properties on the context.bindings object in JavaScript.
For example, an Azure Queue storage trigger supports the following properties:
QueueTrigger - triggering message content if a valid string
DequeueCount
ExpirationTime
Id
InsertionTime
NextVisibleTime
PopReceipt
These metadata values are accessible in function.json file properties. For example, suppose you use a queue
trigger and the queue message contains the name of a blob you want to read. In the function.json file, you can
use the queueTrigger metadata property in the blob path property, as shown in the following example:
{
"bindings": [
{
"name": "myQueueItem",
"type": "queueTrigger",
"queueName": "myqueue-items",
"connection": "MyStorageConnection",
},
{
"name": "myInputBlob",
"type": "blob",
"path": "samples-workitems/{queueTrigger}",
"direction": "in",
"connection": "MyStorageConnection"
}
]
}
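In C#, the same metadata values can also be bound directly as method parameters whose names match the metadata property names (a minimal sketch; the queue name and connection are the ones from the preceding example):
[FunctionName("QueueMetadataExample")]
public static void Run(
    [QueueTrigger("myqueue-items", Connection = "MyStorageConnection")] string myQueueItem,
    int dequeueCount,
    string id,
    ILogger log)
{
    // dequeueCount and id are populated from the trigger metadata.
    log.LogInformation($"Message {id} dequeued {dequeueCount} time(s): {myQueueItem}");
}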
Details of metadata properties for each trigger are described in the corresponding reference article. For an
example, see queue trigger metadata. Documentation is also available in the Integrate tab of the portal, in the
Documentation section below the binding configuration area.
JSON payloads
When a trigger payload is JSON, you can refer to its properties in configuration for other bindings in the same
function and in function code.
The following example shows the function.json file for a webhook function that receives a blob name in JSON:
{"BlobName":"HelloWorld.txt"} . A Blob input binding reads the blob, and the HTTP output binding returns the
blob contents in the HTTP response. Notice that the Blob input binding gets the blob name by referring directly
to the BlobName property ("path": "strings/{BlobName}").
{
"bindings": [
{
"name": "info",
"type": "httpTrigger",
"direction": "in",
"webHookType": "genericJson"
},
{
"name": "blobContents",
"type": "blob",
"direction": "in",
"path": "strings/{BlobName}",
"connection": "AzureWebJobsStorage"
},
{
"name": "res",
"type": "http",
"direction": "out"
}
]
}
For this to work in C# and F#, you need a class that defines the fields to be deserialized, as in the following
example:
using System.Net;
using Microsoft.Extensions.Logging;

public class BlobInfo
{
    public string BlobName { get; set; }
}

public static HttpResponseMessage Run(HttpRequestMessage req, BlobInfo info, string blobContents, ILogger log)
{
    if (blobContents == null) {
        return req.CreateResponse(HttpStatusCode.NotFound);
    }
    log.LogInformation($"Processing: {info.BlobName}");
    return req.CreateResponse(HttpStatusCode.OK, blobContents);
}
Dot notation
If some of the properties in your JSON payload are objects with properties, you can refer to those directly by
using dot ( . ) notation. This notation doesn't work for Cosmos DB or Table storage bindings.
For example, suppose your JSON looks like this:
{
"BlobName": {
"FileName":"HelloWorld",
"Extension":"txt"
}
}
You can refer directly to FileName as BlobName.FileName . With this JSON format, here's what the path property
in the preceding example would look like:
"path": "strings/{BlobName.FileName}.{BlobName.Extension}",
Create GUIDs
The {rand-guid} binding expression creates a GUID. The following blob path in a function.json file creates a
blob with a name like 50710cb5-84b9-4d87-9d83-a03d6976a682.txt.
{
"type": "blob",
"name": "blobOutput",
"direction": "out",
"path": "my-output-container/{rand-guid}.txt"
}
Current time
The binding expression DateTime resolves to DateTime.UtcNow . The following blob path in a function.json file
creates a blob with a name like 2018-02-16T17-59-55Z.txt.
{
"type": "blob",
"name": "blobOutput",
"direction": "out",
"path": "my-output-container/{DateTime}.txt"
}
Binding at runtime
In C# and other .NET languages, you can use an imperative binding pattern, as opposed to the declarative
bindings in function.json and attributes. Imperative binding is useful when binding parameters need to be
computed at runtime rather than design time. To learn more, see the C# developer reference or the C# script
developer reference.
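The following is a minimal sketch of the imperative pattern, using IBinder to compute a blob path at runtime; the queue and container names are placeholders:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

[FunctionName("ImperativeBinding")]
public static async Task Run(
    [QueueTrigger("inputqueue")] string myQueueItem,
    IBinder binder,
    ILogger log)
{
    // The blob path is computed at runtime rather than declared in an attribute.
    var path = $"output-container/{DateTime.UtcNow:yyyy-MM-dd}/message.txt";
    using (var writer = await binder.BindAsync<TextWriter>(new BlobAttribute(path, FileAccess.Write)))
    {
        await writer.WriteAsync(myQueueItem);
    }
    log.LogInformation($"Wrote message to {path}");
}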
Next steps
Using the Azure Function return value
Using the Azure Function return value
Here's C# code that uses the return value for an output binding, followed by an async example:
[FunctionName("QueueTrigger")]
[return: Blob("output-container/{id}")]
public static string Run([QueueTrigger("inputqueue")]WorkItem input, ILogger log)
{
string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
log.LogInformation($"C# script processed queue message. Item={json}");
return json;
}
[FunctionName("QueueTrigger")]
[return: Blob("output-container/{id}")]
public static Task<string> Run([QueueTrigger("inputqueue")]WorkItem input, ILogger log)
{
string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
log.LogInformation($"C# script processed queue message. Item={json}");
return Task.FromResult(json);
}
Next steps
Handle Azure Functions binding errors
Shifting from Express.js to Azure Functions
Express.js is one of the most popular Node.js frameworks for web developers and remains an excellent choice
for building apps that serve API endpoints.
When migrating code to a serverless architecture, refactoring Express.js endpoints affects the following areas:
Middleware : Express.js features a robust collection of middleware. Many middleware modules are no
longer required in light of Azure Functions and Azure API Management capabilities. Ensure you can
replicate or replace any logic handled by essential middleware before migrating endpoints.
Differing APIs : The API used to process requests and responses differs between Azure Functions
and Express.js. The following example details the required changes.
Default route : By default, Azure Functions endpoints are exposed under the api route. Routing rules
are configurable via the routePrefix property in the host.json file, as shown in the sketch after this list.
Configuration and conventions : A Functions app uses the function.json file to define HTTP verbs and
security policies, and to configure the function's input and output. By default, the name of the folder that
contains the function files defines the endpoint name, but you can change the name via the route
property in the function.json file.
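For example, to drop the default api prefix so that routes match your existing Express.js paths, you might set an empty routePrefix in host.json (a sketch showing only the relevant settings):
{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": ""
    }
  }
}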
TIP
Learn more through the interactive tutorial Refactor Node.js and Express APIs to Serverless APIs with Azure Functions.
Example
Express.js
The following example shows a typical Express.js GET endpoint.
// server.js
app.get('/hello', (req, res) => {
try {
res.send("Success!");
} catch(error) {
const err = JSON.stringify(error);
res.status(500).send(`Request error. ${err}`);
}
});
When a GET request is sent to /hello , an HTTP 200 response containing Success! is returned. If the endpoint
encounters an error, the response is an HTTP 500 with the error details.
Azure Functions
Azure Functions organizes configuration and code files into a single folder for each function. By default, the
name of the folder dictates the function name.
For instance, a function named hello has a folder with the following files.
| - hello
| - function.json
| - index.js
The following example implements the same result as the above Express.js endpoint, but with Azure Functions.
JavaScript
TypeScript
// hello/index.js
module.exports = async function (context, req) {
try {
context.res = { body: "Success!" };
} catch(error) {
const err = JSON.stringify(error);
context.res = {
status: 500,
body: `Request error. ${err}`
};
}
};
The following function.json file holds configuration information for the function.
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": ["get"]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
By defining get in the methods array, the function is available to HTTP GET requests. If you want your API to
accept POST requests, you can add post to the array as well.
Next steps
Learn more with the interactive tutorial Refactor Node.js and Express APIs to Serverless APIs with Azure
Functions
Securing Azure Functions
In many ways, planning for secure development, deployment, and operation of serverless functions is much the
same as for any web-based or cloud-hosted application. Azure App Service provides the hosting infrastructure
for your function apps. This article provides security strategies for running your function code and describes
how App Service can help you secure your functions.
The platform components of App Service, including Azure VMs, storage, network connections, web frameworks,
management and integration features, are actively secured and hardened. App Service goes through rigorous
compliance checks on a continuous basis to make sure that:
Your app resources are secured from other customers' Azure resources.
VM instances and runtime software are regularly updated to address newly discovered vulnerabilities.
Communication of secrets (such as connection strings) between your app and other Azure resources (such as
SQL Database) stays within Azure and doesn't cross any network boundaries. Secrets are always encrypted
when stored.
All communication over the App Service connectivity features, such as hybrid connection, is encrypted.
Connections with remote management tools like Azure PowerShell, the Azure CLI, Azure SDKs, and REST APIs are
all encrypted.
24-hour threat management protects the infrastructure and platform against malware, distributed denial-of-
service (DDoS), man-in-the-middle (MITM), and other threats.
For more information on infrastructure and platform security in Azure, see Azure Trust Center.
For a set of security recommendations that follow the Azure Security Benchmark, see Azure Security Baseline for
Azure Functions.
Secure operation
This section guides you on configuring and running your function app as securely as possible.
Defender for Cloud
Defender for Cloud integrates with your function app in the portal. It provides, for free, a quick assessment of
potential configuration-related security vulnerabilities. Function apps running in a dedicated plan can also use
Defender for Cloud's enhanced security features for an additional cost. To learn more, see Protect your Azure
App Service web apps and APIs.
Log and monitor
One way to detect attacks is through activity monitoring and logging analytics. Functions integrates with
Application Insights to collect log, performance, and error data for your function app. Application Insights
automatically detects performance anomalies and includes powerful analytics tools to help you diagnose issues
and to understand how your functions are used. To learn more, see Monitor Azure Functions.
Functions also integrates with Azure Monitor Logs to enable you to consolidate function app logs with system
events for easier analysis. You can use diagnostic settings to configure streaming export of platform logs and
metrics for your functions to the destination of your choice, such as a Log Analytics workspace. To learn more,
see Monitoring Azure Functions with Azure Monitor Logs.
For enterprise-level threat detection and response automation, stream your logs and events to a Log Analytics
workspace. You can then connect Microsoft Sentinel to this workspace. To learn more, see What is Microsoft
Sentinel.
For more security recommendations for observability, see the Azure security baseline for Azure Functions.
Require HTTPS
By default, clients can connect to function endpoints by using either HTTP or HTTPS. You should redirect HTTP to
HTTPS because HTTPS uses the SSL/TLS protocol to provide a secure connection, which is both encrypted and
authenticated. To learn how, see Enforce HTTPS.
When you require HTTPS, you should also require the latest TLS version. To learn how, see Enforce TLS versions.
For more information, see Secure connections (TLS).
Function access keys
Functions lets you use keys to make it harder to access your HTTP function endpoints during development.
Unless the HTTP access level on an HTTP triggered function is set to anonymous , requests must include an API
access key in the request.
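For example, a client might pass the key in the code query string parameter or in the x-functions-key HTTP header (a hypothetical URL; the app, function, and key values are placeholders):
https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?code=<API_KEY>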
While keys provide a default security mechanism, you may want to consider additional options to secure an
HTTP endpoint in production. For example, it's generally not a good practice to distribute a shared secret in public
apps. If your function is being called from a public client, you may want to consider implementing another
security mechanism. To learn more, see Secure an HTTP endpoint in production.
When you renew your function key values, you must manually redistribute the updated key values to all clients
that call your function.
Authorization scopes (function-level)
There are two access scopes for function-level keys:
Function : These keys apply only to the specific functions under which they are defined. When used as an
API key, these only allow access to that function.
Host : Keys with a host scope can be used to access all functions within the function app. When used as an
API key, these allow access to any function within the function app.
Each key is named for reference, and there is a default key (named "default") at the function and host level.
Function keys take precedence over host keys. When two keys are defined with the same name, the function key
is always used.
Master key (admin-level)
Each function app also has an admin-level host key named _master . In addition to providing host-level access
to all functions in the app, the master key also provides administrative access to the runtime REST APIs. This key
cannot be revoked. When you set an access level of admin , requests must use the master key; any other key
results in access failure.
Caution
Due to the elevated permissions in your function app granted by the master key, you should not share this key
with third parties or distribute it in native client applications. Use caution when choosing the admin access level.
System key
Specific extensions may require a system-managed key to access webhook endpoints. System keys are designed
for extension-specific function endpoints that are called by internal components. For example, the Event Grid trigger
requires that the subscription use a system key when calling the trigger endpoint. Durable Functions also uses
system keys to call Durable Task extension APIs.
The scope of system keys is determined by the extension, but it generally applies to the entire function app.
System keys can only be created by specific extensions, and you can't explicitly set their values. Like other keys,
you can generate a new value for the key from the portal or by using the key APIs.
Keys comparison
The following table compares the uses for various kinds of access keys:
[Table: Action, Scope, Valid keys]
To learn more about access keys, see the HTTP trigger binding article.
Secret repositories
By default, keys are stored in a Blob storage container in the account provided by the AzureWebJobsStorage
setting. You can use the AzureWebJobsSecretStorageType setting to override this behavior and store keys in a
different location.
[Table: Location value, Description — supported AzureWebJobsSecretStorageType values]
When using Key Vault for key storage, the app settings you need depend on the managed identity type.
Functions runtime version 3.x only supports system-assigned managed identities.
Version 4.x
Version 3.x
SETTING NAME                                     SYSTEM-ASSIGNED   USER-ASSIGNED   APP REGISTRATION
AzureWebJobsSecretStorageKeyVaultUri             ✓                 ✓               ✓
AzureWebJobsSecretStorageKeyVaultClientId        X                 ✓               ✓
AzureWebJobsSecretStorageKeyVaultClientSecret    X                 X               ✓
AzureWebJobsSecretStorageKeyVaultTenantId        X                 X               ✓
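For example, to store keys in Key Vault by using a system-assigned managed identity, you might need only the following settings (a sketch; the vault URI is a placeholder, and the identity must be granted access to the vault):
AzureWebJobsSecretStorageType = keyvault
AzureWebJobsSecretStorageKeyVaultUri = https://<VAULT_NAME>.vault.azure.net/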
Authentication/authorization
While function keys can provide some mitigation for unwanted access, the only way to truly secure your
function endpoints is by implementing positive authentication of clients accessing your functions. You can then
make authorization decisions based on identity.
Enable App Service Authentication/Authorization
The App Service platform lets you use Azure Active Directory (AAD) and several third-party identity providers to
authenticate clients. You can use this strategy to implement custom authorization rules for your functions, and
you can work with user information from your function code. To learn more, see Authentication and
authorization in Azure App Service and Working with client identities.
Use Azure API Management (APIM) to authenticate requests
APIM provides a variety of API security options for incoming requests. To learn more, see API Management
authentication policies. With APIM in place, you can configure your function app to accept requests only from
the IP address of your APIM instance. To learn more, see IP address restrictions.
Permissions
As with any application or service, the goal is to run your function app with the lowest possible permissions.
User management permissions
Functions supports built-in Azure role-based access control (Azure RBAC). Azure roles supported by Functions
are Contributor, Owner, and Reader.
Permissions are effective at the function app level. The Contributor role is required to perform most function
app-level tasks. You also need the Contributor role along with the Monitoring Reader permission to be able to
view log data in Application Insights. Only the Owner role can delete a function app.
Organize functions by privilege
Connection strings and other credentials stored in application settings give all of the functions in the function
app the same set of permissions in the associated resource. Consider minimizing the number of functions with
access to specific credentials by moving functions that don't use those credentials to a separate function app.
You can always use techniques such as function chaining to pass data between functions in different function
apps.
Managed identities
A managed identity from Azure Active Directory (Azure AD) allows your app to easily access other Azure AD-
protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not
require you to provision or rotate any secrets. For more about managed identities in Azure AD, see Managed
identities for Azure resources.
Your application can be granted two types of identities:
A system-assigned identity is tied to your application and is deleted if your app is deleted. An app can
only have one system-assigned identity.
A user-assigned identity is a standalone Azure resource that can be assigned to your app. An app can have
multiple user-assigned identities.
Managed identities can be used in place of secrets for connections from some triggers and bindings. See
Identity-based connections.
For more information, see How to use managed identities for App Service and Azure Functions.
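For example, function code might use the managed identity to read a secret from Key Vault without any stored credentials (a minimal sketch, assuming the Azure.Identity and Azure.Security.KeyVault.Secrets packages and that the identity has been granted access to the vault):
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential resolves the app's managed identity when running in Azure.
var client = new SecretClient(
    new Uri("https://<VAULT_NAME>.vault.azure.net/"),
    new DefaultAzureCredential());
KeyVaultSecret secret = client.GetSecret("<SECRET_NAME>");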
Restrict CORS access
Cross-origin resource sharing (CORS) is a way to allow web apps running in another domain to make requests
to your HTTP trigger endpoints. App Service provides built-in support for handling the required CORS headers in
HTTP requests. CORS rules are defined on a function app level.
While it's tempting to use a wildcard that allows all sites to access your endpoint, doing so defeats the purpose of
CORS, which is to help prevent cross-site scripting attacks. Instead, add a separate CORS entry for the domain of
each web app that must access your endpoint.
Managing secrets
To be able to connect to the various services and resources needed to run your code, function apps need to be able
to access secrets, such as connection strings and service keys. This section describes how to store secrets
required by your functions.
Never store secrets in your function code.
Application settings
By default, you store connection strings and secrets used by your function app and bindings as application
settings. This makes these credentials available to both your function code and the various bindings used by the
function. The application setting (key) name is used to retrieve the actual value, which is the secret.
For example, every function app requires an associated storage account, which is used by the runtime. By
default, the connection to this storage account is stored in an application setting named AzureWebJobsStorage .
App settings and connection strings are stored encrypted in Azure. They're decrypted only before being injected
into your app's process memory when the app starts. The encryption keys are rotated regularly. If you prefer to
manage the secure storage of your secrets yourself, the app settings should instead be references to Azure Key
Vault.
You can also encrypt settings by default in the local.settings.json file when developing functions on your local
computer. To learn more, see the IsEncrypted property in the local settings file.
Key Vault references
While application settings are sufficient for most functions, you may want to share the same secrets
across multiple services. In this case, redundant storage of secrets results in more potential vulnerabilities. A
more secure approach is to use a central secret storage service and use references to this service instead of the
secrets themselves.
Azure Key Vault is a service that provides centralized secrets management, with full control over access policies
and audit history. You can use a Key Vault reference in the place of a connection string or key in your application
settings. To learn more, see Use Key Vault references for App Service and Azure Functions.
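For example, an application setting can hold a reference instead of the secret itself (a sketch; the vault and secret names are placeholders):
MyServiceConnection = @Microsoft.KeyVault(SecretUri=https://<VAULT_NAME>.vault.azure.net/secrets/<SECRET_NAME>/)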
Identity-based connections
Identities may be used in place of secrets for connecting to some resources. This has the advantage of not
requiring the management of a secret, and it provides more fine-grained access control and auditing.
When you are writing code that creates the connection to Azure services that support Azure AD authentication,
you can choose to use an identity instead of a secret or connection string. Details for both connection methods
are covered in the documentation for each service.
Some Azure Functions trigger and binding extensions may be configured using an identity-based connection.
Today, this includes the Azure Blob and Azure Queue extensions. For information about how to configure these
extensions to use an identity, see How to use identity-based connections in Azure Functions.
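For example, a blob trigger's connection property can point to a setting prefix instead of a full connection string (a sketch; the identity also needs an appropriate role assignment, such as Storage Blob Data Owner, on the account):
MyStorageConnection__serviceUri = https://<STORAGE_ACCOUNT_NAME>.blob.core.windows.net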
Set usage quotas
Consider setting a usage quota on functions running in a Consumption plan. When you set a daily GB-sec limit
on the sum total execution of functions in your function app, execution is stopped when the limit is reached. This
could potentially help mitigate against malicious code executing your functions. To learn how to estimate
consumption for your functions, see Estimating Consumption plan costs.
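For example, you might set a daily quota from the Azure CLI by updating the dailyMemoryTimeQuota site property (a sketch; the property name and GB-second unit are assumptions based on the site configuration schema):
az functionapp update --name <APP_NAME> --resource-group <RESOURCE_GROUP> --set dailyMemoryTimeQuota=100000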
Data validation
The triggers and bindings used by your functions don't provide any additional data validation. Your code must
validate any data received from a trigger or input binding. If an upstream service is compromised, you don't
want unvalidated inputs flowing through your functions. For example, if your function stores data from an Azure
Storage queue in a relational database, you must validate the data and parameterize your commands to avoid
SQL injection attacks.
Don't assume that the data coming into your function has already been validated or sanitized. It's also a good
idea to verify that the data being written to output bindings is valid.
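For example, a queue-triggered function that writes to SQL might bind the message as a parameter instead of concatenating it into the query (a minimal sketch; the queue, table, and connection-string setting names are placeholders):
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Logging;

[FunctionName("SaveWorkItem")]
public static void Run([QueueTrigger("workitems")] string queueItem, ILogger log)
{
    var connectionString = Environment.GetEnvironmentVariable("SqlConnectionString");
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "INSERT INTO WorkItems (Payload) VALUES (@payload)", connection))
    {
        // Binding the value as a parameter prevents the message from being parsed as SQL.
        command.Parameters.AddWithValue("@payload", queueItem);
        connection.Open();
        command.ExecuteNonQuery();
    }
}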
Handle errors
While it seems basic, it's important to write good error handling in your functions. Unhandled errors bubble-up
to the host and are handled by the runtime. Different bindings handle processing of errors differently. To learn
more, see Azure Functions error handling.
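For example, wrapping the body of a function in a try/catch block lets you log the failure and still surface it to the runtime (a minimal sketch; the queue name is a placeholder):
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

[FunctionName("ProcessOrder")]
public static void Run([QueueTrigger("orders")] string order, ILogger log)
{
    try
    {
        // Business logic for the order goes here.
    }
    catch (Exception ex)
    {
        log.LogError(ex, "Failed to process order.");
        // Rethrow so the trigger's retry and poison-message handling still applies.
        throw;
    }
}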
Disable remote debugging
Make sure that remote debugging is disabled, except when you are actively debugging your functions. You can
disable remote debugging in the General Settings tab of your function app Configuration in the portal.
Restrict CORS access
Azure Functions supports cross-origin resource sharing (CORS). CORS is configured in the portal and through
the Azure CLI. The CORS allowed origins list applies at the function app level. With CORS enabled, responses
include the Access-Control-Allow-Origin header. For more information, see Cross-origin resource sharing.
Don't use wildcards in your allowed origins list. Instead, list the specific domains from which you expect to get
requests.
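For example, you might add an allowed origin by using the Azure CLI (a sketch; the app, resource group, and domain are placeholders):
az functionapp cors add --name <APP_NAME> --resource-group <RESOURCE_GROUP> --allowed-origins https://contoso.com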
Store data encrypted
Azure Storage encrypts all data in a storage account at rest. For more information, see Azure Storage encryption
for data at rest.
By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can
supply customer-managed keys to use for encryption of blob and file data. These keys must be present in Azure
Key Vault for Functions to be able to access the storage account. To learn more, see Encryption at rest using
customer-managed keys.
Secure deployment
Azure Functions tooling and integration make it easy to publish local function project code to Azure. It's
important to understand how deployment works when considering security for an Azure Functions topology.
Deployment credentials
App Service deployments require a set of deployment credentials. These deployment credentials are used to
secure your function app deployments. Deployment credentials are managed by the App Service platform and
are encrypted at rest.
There are two kinds of deployment credentials:
User-level credentials : one set of credentials for the entire Azure account. It can be used to deploy to
App Service for any app, in any subscription, that the Azure account has permission to access. It's the
default set that's surfaced in the portal GUI (such as the Overview and Properties of the app's resource
page). When a user is granted app access via Role-Based Access Control (RBAC) or coadmin permissions,
that user can use their own user-level credentials until the access is revoked. Do not share these
credentials with other Azure users.
App-level credentials : one set of credentials for each app. It can be used to deploy to that app only. The
credentials for each app are generated automatically at app creation. They can't be configured manually,
but can be reset anytime. For a user to be granted access to app-level credentials via RBAC, that user
must be a Contributor or higher on the app (including the built-in Website Contributor role). Readers aren't
allowed to publish and can't access those credentials.
At this time, Key Vault isn't supported for deployment credentials. To learn more about managing deployment
credentials, see Configure deployment credentials for Azure App Service.
Disable FTP
By default, each function app has an FTP endpoint enabled. The FTP endpoint is accessed using deployment
credentials.
FTP isn't recommended for deploying your function code. FTP deployments are manual, and they require you to
synchronize triggers. To learn more, see FTP deployment.
When you're not planning on using FTP, you should disable it in the portal. If you do choose to use FTP, you
should enforce FTPS.
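For example, you might disable plain FTP while still allowing FTPS by using the Azure CLI (a sketch; use Disabled instead of FtpsOnly to turn off both):
az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --ftps-state FtpsOnly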
Secure the scm endpoint
Every function app has a corresponding scm service endpoint that's used by the Advanced Tools (Kudu) service
for deployments and other App Service site extensions. The scm endpoint for a function app is always a URL in
the form https://<FUNCTION_APP_NAME>.scm.azurewebsites.net . When you use network isolation to secure your
functions, you must also account for this endpoint.
By having a separate scm endpoint, you can control deployments and other advanced tools functionality for
function apps that are isolated or running in a virtual network. The scm endpoint supports both basic
authentication (using deployment credentials) and single sign-on with your Azure portal credentials. To learn
more, see Accessing the Kudu service.
Continuous security validation
Since security needs to be considered at every step in the development process, it makes sense to also implement
security validations in a continuous deployment environment. This is sometimes called DevSecOps. Using Azure
DevOps for your deployment pipeline lets you integrate validation into the deployment process. For more
information, see Learn how to add continuous security validation to your CI/CD pipeline.
Network security
Restricting network access to your function app lets you control who can access your functions endpoints.
Functions leverages App Service infrastructure to enable your functions to access resources without using
internet-routable addresses or to restrict internet access to a function endpoint. To learn more about these
networking options, see Azure Functions networking options.
Set access restrictions
Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated
in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more,
see Azure App Service Access Restrictions.
Private site access
Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by
Azure Private Link. Private Endpoint uses a private IP address from your virtual network, effectively bringing the
service into your virtual network.
You can use Private Endpoint for your functions hosted in the Premium and App Service plans.
If you want to make calls to Private Endpoints, then you must make sure that your DNS lookups resolve to the
private endpoint. You can enforce this behavior in one of the following ways:
Integrate with Azure DNS private zones. When your virtual network doesn't have a custom DNS server, this is
done automatically.
Manage the private endpoint in the DNS server used by your app. To do this you must know the private
endpoint address and then point the endpoint you are trying to reach to that address using an A record.
Configure your own DNS server to forward to Azure DNS private zones.
To learn more, see using Private Endpoints for Web Apps.
Deploy your function app in isolation
Azure App Service Environment (ASE) provides a dedicated hosting environment in which to run your functions.
ASE lets you configure a single front-end gateway that you can use to authenticate all incoming requests. For
more information, see Configuring a Web Application Firewall (WAF) for App Service Environment.
Use a gateway service
Gateway services, such as Azure Application Gateway and Azure Front Door let you set up a Web Application
Firewall (WAF). WAF rules are used to monitor or block detected attacks, providing an extra layer of
protection for your functions. To set up a WAF, your function app needs to be running in an ASE or using Private
Endpoints (preview). To learn more, see Using Private Endpoints.
Next steps
Azure Security Baseline for Azure Functions
Azure Functions diagnostics
Monitor executions in Azure Functions
Azure Functions offers built-in integration with Azure Application Insights to monitor function executions. This
article provides an overview of the monitoring capabilities provided by Azure for monitoring Azure Functions.
Application Insights collects log, performance, and error data. By automatically detecting performance
anomalies and providing powerful analytics tools, it helps you more easily diagnose issues and better understand
how your functions are used. These tools are designed to help you continuously improve the performance and
usability of your functions. You can even use Application Insights during local function app project development.
For more information, see What is Application Insights?.
As Application Insights instrumentation is built into Azure Functions, you need a valid instrumentation key to
connect your function app to an Application Insights resource. The instrumentation key is added to your
application settings as you create your function app resource in Azure. If your function app doesn't already have
this key, you can set it manually.
You can also monitor the function app itself by using Azure Monitor. To learn more, see Monitoring Azure
Functions with Azure Monitor.
IMPORTANT
Application Insights has a sampling feature that can protect you from producing too much telemetry data on completed
executions at times of peak load. Sampling is enabled by default. If you appear to be missing data, you might need to
adjust the sampling settings to fit your particular monitoring scenario. To learn more, see Configure sampling.
The full list of Application Insights features available to your function app is detailed in Application Insights for
Azure Functions supported features.
NOTE
In addition to data from your functions and the Functions host, you can also collect data from the Functions scale
controller.
The host.json file configuration determines how much logging a function app sends to Application Insights.
To learn more about log levels, see Configure log levels.
By assigning logged items to a category, you have more control over telemetry generated from specific sources
in your function app. Categories make it easier to run analytics over collected data. Traces written from your
function code are assigned to individual categories based on the function name. To learn more about categories,
see Configure categories.
Custom telemetry data
In C#, JavaScript, and Python, you can use an Application Insights SDK to write custom telemetry data.
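For example, a C# function might share a TelemetryClient across executions to track a custom event (a minimal sketch, assuming the Microsoft.ApplicationInsights package and a configured instrumentation key):
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CustomTelemetry
{
    // Reuse a single client across function executions.
    private static readonly TelemetryClient telemetryClient =
        new TelemetryClient(TelemetryConfiguration.CreateDefault());

    [FunctionName("TrackItem")]
    public static void Run([QueueTrigger("items")] string item, ILogger log)
    {
        telemetryClient.TrackEvent("ItemProcessed");
        log.LogInformation($"Processed: {item}");
    }
}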
Dependencies
Starting with version 2.x of Functions, Application Insights automatically collects data on dependencies for
bindings that use certain client SDKs. Application Insights distributed tracing and dependency tracking aren't
currently supported for C# apps running in an isolated process. Application Insights collects data on the
following dependencies:
Azure Cosmos DB
Azure Event Hubs
Azure Service Bus
Azure Storage services (Blob, Queue, and Table)
HTTP requests and database calls using SqlClient are also captured. For the complete list of dependencies
supported by Application Insights, see automatically tracked dependencies.
Application Insights generates an application map of collected dependency data. The following is an example of
an application map of an HTTP trigger function with a Queue storage output binding.
Dependencies are written at the Information level. If you filter at Warning or above, you won't see the
dependency data. Also, automatic collection of dependencies happens at a non-user scope. To capture
dependency data, make sure the level is set to at least Information outside the user scope (
Function.<YOUR_FUNCTION_NAME>.User ) in your host.json file.
In addition to automatic dependency data collection, you can also use one of the language-specific Application
Insights SDKs to write custom dependency information to the logs. For an example how to write custom
dependencies, see one of the following language-specific examples:
Log custom telemetry in C# functions
Log custom telemetry in JavaScript functions
Log custom telemetry in Python functions
Writing to logs
The way that you write to logs and the APIs you use depend on the language of your function app project.
See the developer guide for your language to learn more about writing logs from your functions.
C# (.NET class library)
Java
JavaScript
PowerShell
Python
Analyze data
By default, the data collected from your function app is stored in Application Insights. In the Azure portal,
Application Insights provides an extensive set of visualizations of your telemetry data. You can drill into error
logs and query events and metrics. To learn more, including basic examples of how to view and query your
collected data, see Analyze Azure Functions telemetry in Application Insights.
Streaming Logs
While developing an application, you often want to see what's being written to the logs in near real time when
running in Azure.
There are two ways to view a stream of the log data being generated by your function executions.
Built-in log streaming : the App Service platform lets you view a stream of your application log files.
This stream is equivalent to the output seen when you debug your functions during local development
and when you use the Test tab in the portal. All log-based information is displayed. For more
information, see Stream logs. This streaming method supports only a single instance, and can't be used
with an app running on Linux in a Consumption plan.
Live Metrics Stream : when your function app is connected to Application Insights, you can view log
data and other metrics in near real time in the Azure portal using Live Metrics Stream. Use this method
when monitoring functions running on multiple-instances or on Linux in a Consumption plan. This
method uses sampled data.
Log streams can be viewed both in the portal and in most local development environments. To learn how to
enable log streams, see Enable streaming execution logs in Azure Functions.
Diagnostic logs
Application Insights lets you export telemetry data to long-term storage or other analysis services.
Because Functions also integrates with Azure Monitor, you can also use diagnostic settings to send telemetry
data to various destinations, including Azure Monitor logs. To learn more, see Monitoring Azure Functions with
Azure Monitor Logs.
Report issues
To report an issue with Application Insights integration in Functions, or to make a suggestion or request, create
an issue in GitHub.
Next steps
For more information, see the following resources:
Application Insights
ASP.NET Core logging
Azure Functions diagnostics overview
When you’re running a function app, you want to be prepared for any issues that may arise, from 4xx errors to
trigger failures. Azure Functions diagnostics is an intelligent and interactive experience to help you troubleshoot
your function app with no configuration or extra cost. When you do run into issues with your function app,
Azure Functions diagnostics points out what’s wrong. It guides you to the right information to more easily and
quickly troubleshoot and resolve the issue. This article shows you the basics of how to use Azure Functions
diagnostics to more quickly diagnose and solve common function app issues.
After selecting a tile, you can see a list of topics related to the issue described in the tile. These topics provide
snippets of notable information from the full report. Select any of these topics to investigate the issues further.
Also, you can select View Full Report to explore all the topics on a single page.
Next steps
You can ask questions or provide feedback on Azure Functions diagnostics at UserVoice. Include [Diag] in the
title of your feedback.
Monitor your function apps
Estimating Consumption plan costs
There are currently three types of hosting plans for an app that runs in Azure Functions, with each plan having
its own pricing model:
Consumption : You're only charged for the time that your function app runs. This plan includes a free grant on a
per subscription basis.
Premium : Provides you with the same features and scaling mechanism as the Consumption plan, but with
enhanced performance and VNET access. Cost is based on your chosen pricing tier. To learn more, see Azure
Functions Premium plan.
Dedicated (App Service) (basic tier or higher) : When you need to run in dedicated VMs or in isolation, use
custom images, or want to use your excess App Service plan capacity. Uses regular App Service plan billing. Cost
is based on your chosen pricing tier.
You choose the plan that best supports your function performance and cost requirements. To learn more, see
Azure Functions scale and hosting.
This article deals only with the Consumption plan, since this plan results in variable costs. This article supersedes
the Consumption plan cost billing FAQ article.
Durable Functions can also run in a Consumption plan. To learn more about the cost considerations when using
Durable Functions, see Durable Functions billing.
NOTE
While CPU usage isn't directly considered in execution cost, it can have an impact on the cost when it affects the execution
time of the function.
For an HTTP-triggered function, when an error occurs before your function code begins to execute you aren't
charged for an execution. This means that 401 responses from the platform due to API key validation or the App
Service Authentication / Authorization feature don't count against your execution cost. Similarly, 5xx status code
responses aren't counted when they occur in the platform prior to a function processing the request. A 5xx
response generated by the platform after your function code has started to execute is still counted as an
execution, even if the error isn't raised by your function code.
Storage account : Each function app requires that you have an associated General Purpose Azure Storage
account, which is billed separately. This account is used internally by the Functions runtime, but you can also use
it for Storage triggers and bindings. If you don't have a storage account, one is created for you when the function
app is created. To learn more, see Storage account requirements.
Network bandwidth : You don't pay for data transfer between Azure services in the same region. However, you
can incur costs for outbound data transfers to another region or outside of Azure. To learn more, see Bandwidth
pricing details.
Portal
Azure CLI
Azure PowerShell
Use Azure Monitor metrics explorer to view cost-related data for your Consumption plan function apps in a
graphical format.
1. In the Azure portal, navigate to your function app.
2. In the left panel, scroll down to Monitoring and choose Metrics .
3. From Metric , choose Function Execution Count and Sum for Aggregation . This adds the sum of the
execution counts during the chosen period to the chart.
4. Select Add metric and repeat the previous step to add Function Execution Units to the chart.
The resulting chart contains the totals for both execution metrics in the chosen time range, which in this case is
two hours.
As the number of execution units is so much greater than the execution count, the chart just shows execution
units.
This chart shows a total of 1.11 billion Function Execution Units consumed in a two-hour period, measured in
MB-milliseconds. To convert to GB-seconds, divide by 1,024,000. In this example, the function app consumed
1,110,000,000 / 1,024,000 = 1083.98 GB-seconds. You can take this value and multiply it by the current price of
execution time on the Functions pricing page, which gives you the cost of these two hours, assuming you've
already used up any free grants of execution time.
Function-level metrics
Function execution units are a combination of execution time and your memory usage, which makes it a difficult
metric for understanding memory usage. Memory data isn't a metric currently available through Azure Monitor.
However, if you want to optimize the memory usage of your app, you can use the performance counter data
collected by Application Insights.
If you haven't already done so, enable Application Insights in your function app. With this integration enabled,
you can query this telemetry data in the portal.
You can use either Azure Monitor metrics explorer in the Azure portal or REST APIs to get Monitor Metrics data.
Determine memory usage
Under Monitoring , select Logs (Analytics) , then copy the following telemetry query, paste it into the
query window, and select Run . This query returns the total memory usage at each sampled time.
performanceCounters
| where name == "Private Bytes"
| project timestamp, name, value
Determine duration
Azure Monitor tracks metrics at the resource level, which for Functions is the function app. Application Insights
integration emits metrics on a per-function basis. Here's an example analytics query to get the average duration
of a function:
customMetrics
| where name contains "Duration"
| extend averageDuration = valueSum / valueCount
| summarize averageDurationMilliseconds=avg(averageDuration) by name
Next steps
Learn more about Monitoring function apps
Work with Azure Functions Proxies
This article explains how to configure and work with Azure Functions Proxies. With this feature, you can specify
endpoints on your function app that are implemented by another resource. You can use these proxies to break a
large API into multiple function apps (as in a microservice architecture), while still presenting a single API
surface for clients.
Standard Functions billing applies to proxy executions. For more information, see Azure Functions pricing.
NOTE
Proxies is available in Azure Functions versions 1.x to 3.x.
You should also consider using Azure API Management for your application. It provides the same capabilities as Functions
Proxies as well as other tools for building and maintaining APIs, such as OpenAPI integration, rate limiting, and advanced
policies.
Create a proxy
This section shows you how to create a proxy in the Functions portal.
NOTE
Not all languages and operating system combinations support in-portal editing. If you're unable to create a proxy in the
portal, you can instead manually create a proxies.json file in the root of your function app project folder. To learn more
about portal editing support, see Language support details.
Use variables
The configuration for a proxy does not need to be static. You can condition it to use variables from the original
client request, the back-end response, or application settings.
Reference local functions
You can use localhost to reference a function inside the same function app directly, without a roundtrip proxy
request.
"backendUri": "https://localhost/api/httptriggerC#1" will reference a local HTTP triggered function at the route
/api/httptriggerC#1
NOTE
If your function uses function, admin or sys authorization levels, you will need to provide the code and clientId, as per the
original function URL. In this case the reference would look like:
"backendUri": "https://localhost/api/httptriggerC#1?code=<keyvalue>&clientId=<keyname>" We recommend
storing these keys in application settings and referencing those in your proxies. This avoids storing secrets in your source
code.
TIP
Use application settings for back-end hosts when you have multiple deployments or test environments. That way, you can
make sure that you are always talking to the right back-end for that environment.
Troubleshoot Proxies
By adding the flag "debug": true to any proxy in your proxies.json, you enable debug logging. Logs are
stored in D:\home\LogFiles\Application\Proxies\DetailedTrace and accessible through the advanced tools
(Kudu). Any HTTP responses will also contain a Proxy-Trace-Location header with a URL to access the log file.
You can debug a proxy from the client side by adding a Proxy-Trace-Enabled header set to true . This will also
log a trace to the file system, and return the trace URL as a header in the response.
Block proxy traces
For security reasons you may not want to allow anyone calling your service to generate a trace. They will not be
able to access the trace contents without your login credentials, but generating the trace consumes resources
and exposes that you are using Function Proxies.
Disable traces altogether by adding "debug":false to any particular proxy in your proxies.json .
Advanced configuration
The proxies that you configure are stored in a proxies.json file, which is located in the root of a function app
directory. You can manually edit this file and deploy it as part of your app when you use any of the deployment
methods that Functions supports.
TIP
If you have not set up one of the deployment methods, you can also work with the proxies.json file in the portal. Go to
your function app, select Platform features , and then select App Service Editor . By doing so, you can view the entire
file structure of your function app and then make changes.
Proxies.json is defined by a proxies object, which is composed of named proxies and their definitions. Optionally,
if your editor supports it, you can reference a JSON schema for code completion. An example file might look like
the following:
{
"$schema": "http://json.schemastore.org/proxies",
"proxies": {
"proxy1": {
"matchCondition": {
"methods": [ "GET" ],
"route": "/api/{test}"
},
"backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
}
}
}
Each proxy has a friendly name, such as proxy1 in the preceding example. The corresponding proxy definition
object is defined by the following properties:
matchCondition : Required--an object defining the requests that trigger the execution of this proxy. It
contains two properties that are shared with HTTP triggers:
methods: An array of the HTTP methods that the proxy responds to. If it is not specified, the proxy
responds to all HTTP methods on the route.
route: Required--defines the route template, controlling which request URLs your proxy responds to.
Unlike in HTTP triggers, there is no default value.
backendUri : The URL of the back-end resource to which the request should be proxied. This value can
reference application settings and parameters from the original client request. If this property is not included,
Azure Functions responds with an HTTP 200 OK.
requestOverrides : An object that defines transformations to the back-end request. See Define a
requestOverrides object.
responseOverrides : An object that defines transformations to the client response. See Define a
responseOverrides object.
NOTE
The route property in Azure Functions Proxies does not honor the routePrefix property of the Function App host
configuration. If you want to include a prefix such as /api , it must be included in the route property.
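The following example shows a proxy that has been turned off by setting "disabled": true in its definition: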
{
"$schema": "http://json.schemastore.org/proxies",
"proxies": {
"Root": {
"disabled":true,
"matchCondition": {
"route": "/example"
},
"backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>"
}
}
}
Application Settings
The proxy behavior can be controlled by several app settings. They are all outlined in the Functions App Settings
reference:
AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL
AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES
Reserved Characters (string formatting)
Proxies read all strings out of a JSON file, using \ as an escape symbol. Proxies also interpret curly braces. See
the following examples.
CHARACTER       ESCAPED CHARACTER    EXAMPLE
\               \\                   example.com\\text.html --> example.com\text.html
{ or }          {{ or }}             {{ example }} --> { example }
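The following example uses requestOverrides to set headers on the back-end request; the %ANOTHERAPP_API_KEY% value resolves from application settings: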
{
"$schema": "http://json.schemastore.org/proxies",
"proxies": {
"proxy1": {
"matchCondition": {
"methods": [ "GET" ],
"route": "/api/{test}"
},
"backendUri": "https://<AnotherApp>.azurewebsites.net/api/<FunctionName>",
"requestOverrides": {
"backend.request.headers.Accept": "application/xml",
"backend.request.headers.x-functions-key": "%ANOTHERAPP_API_KEY%"
}
}
}
}
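The next example uses responseOverrides to construct the client response directly: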
{
"$schema": "http://json.schemastore.org/proxies",
"proxies": {
"proxy1": {
"matchCondition": {
"methods": [ "GET" ],
"route": "/api/{test}"
},
"responseOverrides": {
"response.body": "Hello, {test}",
"response.headers.Content-Type": "text/plain"
}
}
}
}
NOTE
In this example, the response body is set directly, so no backendUri property is needed. The example shows how you
might use Azure Functions Proxies for mocking APIs.
Azure Functions networking options
This article describes the networking features available across the hosting options for Azure Functions. All the
following networking options give you some ability to access resources without using internet-routable
addresses or to restrict internet access to a function app.
The hosting models have different levels of network isolation available. Choosing the correct one helps you
meet your network isolation requirements.
You can host function apps in a couple of ways:
You can choose from plan options that run on a multitenant infrastructure, with various levels of virtual
network connectivity and scaling options:
The Consumption plan scales dynamically in response to load and offers minimal network isolation
options.
The Premium plan also scales dynamically and offers more comprehensive network isolation.
The Azure App Service plan operates at a fixed scale and offers network isolation similar to the
Premium plan.
You can run functions in an App Service Environment. This method deploys your function into your virtual
network and offers full network control and isolation.
NOTE
With network restrictions in place, you can deploy only from within your virtual network, or when you've put the IP
address of the machine you're using to access the Azure portal on the Safe Recipients list. However, you can still manage
the function using the portal.
You can't use service endpoints to restrict access to apps that run in an App Service Environment. When your
app is in an App Service Environment, you can control access to it by applying IP access rules.
To learn how to set up service endpoints, see Establish Azure Functions private site access.
3. The drop-down list contains all of the Azure Resource Manager virtual networks in your subscription in
the same region. Select the virtual network you want to integrate with.
The Functions Premium Plan only supports regional virtual network integration. If the virtual network
is in the same region, either create a new subnet or select an empty, pre-existing subnet.
To select a virtual network in another region, you must have a virtual network gateway provisioned
with point-to-site enabled. Virtual network integration across regions is only supported for Dedicated
plans, but global peerings will work with regional virtual network integration.
During the integration, your app is restarted. When integration is finished, you'll see details on the virtual
network you're integrated with. By default, Route All will be enabled, and all traffic will be routed into your
virtual network.
If you want only your private traffic (RFC1918 traffic) to be routed, follow the steps in the App Service
documentation.
Regional virtual network integration
Using regional virtual network integration enables your app to access:
Resources in the same virtual network as your app.
Resources in virtual networks peered to the virtual network your app is integrated with.
Service endpoint secured services.
Resources across Azure ExpressRoute connections.
Resources across peered connections, which include Azure ExpressRoute connections.
Private endpoints
When you use regional virtual network integration, you can use the following Azure networking features:
Network security groups (NSGs) : You can block outbound traffic with an NSG that's placed on your
integration subnet. The inbound rules don't apply because you can't use virtual network integration to
provide inbound access to your app.
Route tables (UDRs) : You can place a route table on the integration subnet to send outbound traffic where
you want.
NOTE
When you route all of your outbound traffic into your virtual network, it's subject to the NSGs and UDRs that are applied
to your integration subnet. When virtual network integrated, your function app's outbound traffic to public IP addresses
is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic
elsewhere.
Regional virtual network integration isn't able to use port 25.
CIDR BLOCK SIZE    MAX AVAILABLE ADDRESSES    MAX HORIZONTAL SCALE (INSTANCES)*
/28                11                         5
/27                27                         13
/26                59                         29
*Assumes that you'll need to scale up or down in either size or SKU at some point.
Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate
whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Premium plans,
you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux. When creating
subnets in Azure portal as part of integrating with the virtual network, a minimum size of /24 and /26 is
required for Windows and Linux respectively.
When you want your apps in another plan to reach a virtual network that's already connected to by apps in
another plan, select a different subnet than the one being used by the pre-existing virtual network integration.
The feature is fully supported for both Windows and Linux apps, including custom containers. All of the
behaviors act the same between Windows apps and Linux apps.
Network security groups
You can use network security groups to block inbound and outbound traffic to resources in a virtual network. An
app that uses regional virtual network integration can use a network security group to block outbound traffic to
resources in your virtual network or the internet. To block traffic to public addresses, you must have virtual
network integration with Route All enabled. The inbound rules in an NSG don't apply to your app because
virtual network integration affects only outbound traffic from your app.
To control inbound traffic to your app, use the Access Restrictions feature. An NSG that's applied to your
integration subnet is in effect regardless of any routes applied to your integration subnet. If your function app is
virtual network integrated with Route All enabled, and you don't have any routes that affect public address
traffic on your integration subnet, all of your outbound traffic is still subject to NSGs assigned to your
integration subnet. When Route All isn't enabled, NSGs are only applied to RFC1918 traffic.
Routes
You can use route tables to route outbound traffic from your app to wherever you want. By default, route tables
only affect your RFC1918 destination traffic. When Route All is enabled, all of your outbound calls are affected.
When Route All is disabled, only private traffic (RFC1918) is affected by your route tables. Routes that are set on
your integration subnet won't affect replies to inbound app requests. Common destinations can include firewall
devices or gateways.
If you want to route all outbound traffic on-premises, you can use a route table to send all outbound traffic to
your ExpressRoute gateway. If you do route traffic to a gateway, be sure to set routes in the external network to
send any replies back.
Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like
an ExpressRoute gateway, your app outbound traffic is affected. By default, BGP routes affect only your RFC1918
destination traffic. When your function app is virtual network integrated with Route All enabled, all outbound
traffic can be affected by your BGP routes.
Azure DNS private zones
After your app integrates with your virtual network, it uses the same DNS server that your virtual network is
configured with and will work with the Azure DNS private zones linked to the virtual network.
When you create a function app, you must create or link to a general-purpose Azure Storage account that
supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with
service endpoints or private endpoints.
This feature is supported for all Windows and Linux virtual network-supported SKUs in the Dedicated (App
Service) plan and for the Premium plans. The Consumption plan isn't supported. To learn how to set up a
function with a storage account restricted to a private network, see Restrict your storage account to a virtual
network.
You can also enable virtual network triggers by using the following Azure CLI command:
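A sketch of that command (resource names are placeholders; it enables the runtime scale monitoring setting that powers virtual network triggers):

az resource update --resource-group <RESOURCE_GROUP> --name <FUNCTION_APP_NAME>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites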
TIP
Enabling virtual network triggers may have an impact on the performance of your application since your App Service plan
instances will need to monitor your triggers to determine when to scale. This impact is likely to be very small.
Virtual network triggers are supported in version 2.x and above of the Functions runtime. The following non-
HTTP trigger types are supported.
IMPORTANT
When you enable virtual network trigger support, only the trigger types shown in the previous table scale dynamically
with your application. You can still use triggers that aren't in the table, but they're not scaled beyond their pre-warmed
instance count. For the complete list of triggers, see Triggers and bindings.
App Service plan and App Service Environment with virtual network triggers
When your function app runs in either an App Service plan or an App Service Environment, you can use non-
HTTP trigger functions. For your functions to get triggered correctly, you must be connected to a virtual network
with access to the resource defined in the trigger connection.
For example, assume you want to configure Azure Cosmos DB to accept traffic only from a virtual network. In
this case, you must deploy your function app in an App Service plan that provides virtual network integration
with that virtual network. Integration enables a function to be triggered by that Azure Cosmos DB resource.
Hybrid Connections
Hybrid Connections is a feature of Azure Relay that you can use to access application resources in other
networks. It provides access from your app to an application endpoint. You can't use it to access your application.
Hybrid Connections is available to functions that run on Windows in all but the Consumption plan.
As used in Azure Functions, each hybrid connection correlates to a single TCP host and port combination. This
means that the hybrid connection's endpoint can be on any operating system and any application as long as
you're accessing a TCP listening port. The Hybrid Connections feature doesn't know or care what the application
protocol is or what you're accessing. It just provides network access.
To learn more, see the App Service documentation for Hybrid Connections. These same configuration steps
support Azure Functions.
IMPORTANT
Hybrid Connections is only supported on Windows plans. Linux isn't supported.
Outbound IP restrictions
Outbound IP restrictions are available in a Premium plan, App Service plan, or App Service Environment. You
can configure outbound restrictions for the virtual network where your App Service Environment is deployed.
When you integrate a function app in a Premium plan or an App Service plan with a virtual network, the app
can still make outbound calls to the internet by default. By integrating your function app with a virtual network
with Route All enabled, you force all outbound traffic to be sent into your virtual network, where network
security group rules can be used to restrict traffic.
To learn how to control the outbound IP using a virtual network, see Tutorial: Control Azure Functions outbound
IP with an Azure virtual network NAT gateway.
Automation
The following APIs let you programmatically manage regional virtual network integrations:
Azure CLI : Use the az functionapp vnet-integration commands to add, list, or remove a regional virtual
network integration.
ARM templates : Regional virtual network integration can be enabled by using an Azure Resource Manager
template. For a full example, see this Functions quickstart template.
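For example, a hedged sketch of adding an integration with the Azure CLI (all names are placeholders):

az functionapp vnet-integration add --resource-group <RESOURCE_GROUP> --name <FUNCTION_APP_NAME> --vnet <VNET_NAME> --subnet <SUBNET_NAME>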
Troubleshooting
The feature is easy to set up, but that doesn't mean your experience will be problem free. If you encounter
problems accessing your desired endpoint, there are some utilities you can use to test connectivity from the app
console. There are two consoles that you can use. One is the Kudu console, and the other is the console in the
Azure portal. To reach the Kudu console from your app, go to Tools > Kudu . You can also reach the Kudu
console at [sitename].scm.azurewebsites.net. After the website loads, go to the Debug console tab. To get to
the Azure portal-hosted console from your app, go to Tools > Console .
Tools
In native Windows apps, the tools ping , nslookup , and tracert won't work through the console because of
security constraints (they work in custom Windows containers). To fill the void, two separate tools are added. To
test DNS functionality, we added a tool named nameresolver.exe . The syntax is:
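nameresolver.exe hostname optional: DNS Server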
You can use nameresolver to check the hostnames that your app depends on. This way you can test if you have
anything misconfigured with your DNS or perhaps don't have access to your DNS server. You can see the DNS
server that your app uses in the console by looking at the environmental variables WEBSITE_DNS_SERVER and
WEBSITE_DNS_ALT_SERVER.
NOTE
The nameresolver.exe tool currently doesn't work in custom Windows containers.
You can use the next tool to test for TCP connectivity to a host and port combination. This tool is called tcpping
and the syntax is:
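tcpping.exe hostname [optional: port]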
The tcpping utility tells you if you can reach a specific host and port. It can show success only if there's an
application listening at the host and port combination, and there's network access from your app to the specified
host and port.
Debug access to virtual network-hosted resources
A number of things can prevent your app from reaching a specific host and port. Most of the time it's one of
these things:
A firewall is in the way. If you have a firewall in the way, you hit the TCP timeout. The TCP timeout is 21
seconds in this case. Use the tcpping tool to test connectivity. TCP timeouts can be caused by many things
beyond firewalls, but start there.
DNS isn't accessible. The DNS timeout is 3 seconds per DNS server. If you have two DNS servers, the
timeout is 6 seconds. Use nameresolver to see if DNS is working. You can't use nslookup, because that
doesn't use the DNS server that your virtual network is configured with. If DNS is inaccessible, a firewall or NSG
could be blocking access to your DNS server, or the server could be down.
If those items don't answer your problems, look first for things like:
Regional virtual network integration
Is your destination a non-RFC1918 address and you don't have Route All enabled?
Is there an NSG blocking egress from your integration subnet?
If you're going across Azure ExpressRoute or a VPN, is your on-premises gateway configured to route traffic
back up to Azure? If you can reach endpoints in your virtual network but not on-premises, check your routes.
Do you have enough permissions to set delegation on the integration subnet? During regional virtual
network integration configuration, your integration subnet is delegated to Microsoft.Web/serverFarms. The
VNet integration UI delegates the subnet to Microsoft.Web/serverFarms automatically. If your account
doesn't have sufficient networking permissions to set delegation, you'll need someone who can set attributes
on your integration subnet to delegate the subnet. To manually delegate the integration subnet, go to the
Azure Virtual Network subnet UI and set the delegation for Microsoft.Web/serverFarms.
Gateway-required virtual network integration
Is the point-to-site address range in the RFC 1918 ranges (10.0.0.0-10.255.255.255 / 172.16.0.0-
172.31.255.255 / 192.168.0.0-192.168.255.255)?
Does the gateway show as being up in the portal? If your gateway is down, then bring it back up.
Do certificates show as being in sync, or do you suspect that the network configuration was changed? If your
certificates are out of sync or you suspect that a change was made to your virtual network configuration that
wasn't synced with your ASPs, select Sync Network .
If you're going across a VPN, is the on-premises gateway configured to route traffic back up to Azure? If you
can reach endpoints in your virtual network but not on-premises, check your routes.
Are you trying to use a coexistence gateway that supports both point to site and ExpressRoute? Coexistence
gateways aren't supported with virtual network integration.
Debugging networking issues is a challenge because you can't see what's blocking access to a specific host:port
combination. Some causes include:
You have a firewall up on your host that prevents access to the application port from your point-to-site IP
range. Crossing subnets often requires public access.
Your target host is down.
Your application is down.
You had the wrong IP or hostname.
Your application is listening on a different port than what you expected. You can match your process ID with
the listening port by using "netstat -aon" on the endpoint host.
Your network security groups are configured in such a manner that they prevent access to your application
host and port from your point-to-site IP range.
You don't know what address your app actually uses. It could be any address in the integration subnet or point-
to-site address range, so you need to allow access from the entire address range.
More debug steps include:
Connect to a VM in your virtual network and attempt to reach your resource host:port from there. To test for
TCP access, use the PowerShell command Test-NetConnection . The syntax is:
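Test-NetConnection hostname [optional: -Port]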
Bring up an application on a VM and test access to that host and port from the console from your app by
using tcpping .
On-premises resources
If your app can't reach a resource on-premises, check if you can reach the resource from your virtual network.
Use the Test-NetConnection PowerShell command to check for TCP access. If your VM can't reach your on-
premises resource, your VPN or ExpressRoute connection might not be configured properly.
If your virtual network-hosted VM can reach your on-premises system but your app can't, the cause is likely one
of the following reasons:
Your routes aren't configured with your subnet or point-to-site address ranges in your on-premises gateway.
Your network security groups are blocking access for your point-to-site IP range.
Your on-premises firewalls are blocking traffic from your point-to-site IP range.
You're trying to reach a non-RFC 1918 address by using the regional virtual network integration feature.
Deleting the App Service plan or web app before disconnecting the VNet integration
If you deleted the web app or the App Service plan without disconnecting the VNet integration first, you will not
be able to do any update/delete operations on the virtual network or subnet that was used for the integration
with the deleted resource. A subnet delegation 'Microsoft.Web/serverFarms' will remain assigned to your
subnet and will prevent the update/delete operations.
To update or delete the subnet or virtual network again, you need to re-create the VNet integration and
then disconnect it:
1. Re-create the App Service plan and web app (it is mandatory to use the exact same web app name as before).
2. Navigate to the 'Networking' blade on the web app and configure the VNet integration.
3. After the VNet integration is configured, select the 'Disconnect' button.
4. Delete the App Service plan or web app.
5. Update/Delete the subnet or virtual network.
If you still encounter issues with the VNet integration after following the steps above, please contact Microsoft
Support.
Next steps
To learn more about networking and Azure Functions:
Follow the tutorial about getting started with virtual network integration
Read the Functions networking FAQ
Learn more about virtual network integration with App Service/Functions
Learn more about virtual networks in Azure
Enable more networking features and control with App Service Environments
Connect to individual on-premises resources without firewall changes by using Hybrid Connections
IP addresses in Azure Functions
8/2/2022 • 5 minutes to read
This article explains the following concepts related to IP addresses of function apps:
Locating the IP addresses currently in use by a function app.
Conditions that cause function app IP addresses to change.
Restricting the IP addresses that can access a function app.
Defining dedicated IP addresses for a function app.
IP addresses are associated with function apps, not with individual functions. Incoming HTTP requests can't use
the inbound IP address to call individual functions; they must use the default domain name
(functionappname.azurewebsites.net) or a custom domain name.
The following example shows an entry for a region from the downloadable Azure IP ranges file described on the Download Center page:
{
"name": "AzureCloud.westeurope",
"id": "AzureCloud.westeurope",
"properties": {
"changeNumber": 9,
"region": "westeurope",
"platform": "Azure",
"systemService": "",
"addressPrefixes": [
"13.69.0.0/17",
"13.73.128.0/18",
... Some IP addresses not shown here
"213.199.180.192/27",
"213.199.183.0/24"
]
}
}
For information about when this file is updated and when the IP addresses change, expand the Details section
of the Download Center page.
IP address restrictions
You can configure a list of IP addresses that you want to allow or deny access to a function app. For more
information, see Azure App Service Static IP Restrictions.
Dedicated IP addresses
There are several strategies to explore when your function app requires static, dedicated IP addresses.
Virtual network NAT gateway for outbound static IP
You can control the IP address of outbound traffic from your functions by using a virtual network NAT gateway
to direct traffic through a static public IP address. You can use this topology when running in a Premium plan or
in a Dedicated (App Service) plan. To learn more, see Tutorial: Control Azure Functions outbound IP with an
Azure virtual network NAT gateway.
App Service Environments
For full control over the IP addresses, both inbound and outbound, we recommend App Service Environments
(the Isolated tier of App Service plans). For more information, see App Service Environment IP addresses and
How to control inbound traffic to an App Service Environment.
To find out if your function app runs in an App Service Environment:
Azure portal
Azure CLI
Azure PowerShell
Azure Functions custom handlers
Every Functions app is executed by a language-specific handler. While Azure Functions features many language
handlers by default, there are cases where you may want to use other languages or runtimes.
Custom handlers are lightweight web servers that receive events from the Functions host. Any language that
supports HTTP primitives can implement a custom handler.
Custom handlers are best suited for situations where you want to:
Implement a function app in a language that's not currently offered out-of-the-box, such as Go or Rust.
Implement a function app in a runtime that's not currently featured by default, such as Deno.
With custom handlers, you can use triggers and input and output bindings via extension bundles.
Get started with Azure Functions custom handlers with quickstarts in Go and Rust.
Overview
The following diagram shows the relationship between the Functions host and a web server implemented as a
custom handler.
1. Each event triggers a request sent to the Functions host. An event is any trigger that is supported by Azure
Functions.
2. The Functions host then issues a request payload to the web server. The payload holds trigger and input
binding data and other metadata for the function.
3. The web server executes the individual function, and returns a response payload to the Functions host.
4. The Functions host passes data from the response to the function's output bindings for processing.
An Azure Functions app implemented as a custom handler must configure the host.json, local.settings.json, and
function.json files according to a few conventions.
Application structure
To implement a custom handler, you need the following aspects to your application:
A host.json file at the root of your app
A local.settings.json file at the root of your app
A function.json file for each function (inside a folder that matches the function name)
A command, script, or executable, which runs a web server
The following diagram shows how these files look on the file system for a function named "MyQueueFunction"
and a custom handler executable named handler.exe.
| /MyQueueFunction
| function.json
|
| host.json
| local.settings.json
| handler.exe
Configuration
The application is configured via the host.json and local.settings.json files.
host.json
host.json tells the Functions host where to send requests by pointing to a web server capable of processing
HTTP events.
A custom handler is defined by configuring the host.json file with details on how to run the web server via the
customHandler section.
{
"version": "2.0",
"customHandler": {
"description": {
"defaultExecutablePath": "handler.exe"
}
}
}
The customHandler section points to a target as defined by the defaultExecutablePath . The execution target may
either be a command, executable, or file where the web server is implemented.
Use the arguments array to pass any arguments to the executable. Arguments support expansion of
environment variables (application settings) using %% notation.
You can also change the working directory used by the executable with workingDirectory .
{
"version": "2.0",
"customHandler": {
"description": {
"defaultExecutablePath": "app/handler.exe",
"arguments": [
"--database-connection-string",
"%DATABASE_CONNECTION_STRING%"
],
"workingDirectory": "app"
}
}
}
Bindings support
Standard triggers along with input and output bindings are available by referencing extension bundles in your
host.json file.
local.settings.json
local.settings.json defines application settings used when running the function app locally. As it may contain
secrets, local.settings.json should be excluded from source control. In Azure, use application settings instead.
For custom handlers, set FUNCTIONS_WORKER_RUNTIME to Custom in local.settings.json.
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "Custom"
}
}
Function metadata
When used with a custom handler, the function.json contents are no different from how you would define a
function under any other context. The only requirement is that function.json files must be in a folder named to
match the function name.
The following function.json configures a function that has a queue trigger and a queue output binding. Because
it's in a folder named MyQueueFunction, it defines a function named MyQueueFunction.
MyQueueFunction/function.json
{
"bindings": [
{
"name": "myQueueItem",
"type": "queueTrigger",
"direction": "in",
"queueName": "messages-incoming",
"connection": "AzureWebJobsStorage"
},
{
"name": "$return",
"type": "queue",
"direction": "out",
"queueName": "messages-outgoing",
"connection": "AzureWebJobsStorage"
}
]
}
Request payload
When a queue message is received, the Functions host sends an HTTP POST request to the custom handler with a
payload in the body.
The following code represents a sample request payload. The payload includes a JSON structure with two
members: Data and Metadata .
The Data member includes keys that match input and trigger names as defined in the bindings array in the
function.json file.
The Metadata member includes metadata generated from the event source.
{
"Data": {
"myQueueItem": "{ message: \"Message sent\" }"
},
"Metadata": {
"DequeueCount": 1,
"ExpirationTime": "2019-10-16T17:58:31+00:00",
"Id": "800ae4b3-bdd2-4c08-badd-f08e5a34b865",
"InsertionTime": "2019-10-09T17:58:31+00:00",
"NextVisibleTime": "2019-10-09T18:08:32+00:00",
"PopReceipt": "AgAAAAMAAAAAAAAAAgtnj8x+1QE=",
"sys": {
"MethodName": "QueueTrigger",
"UtcNow": "2019-10-09T17:58:32.2205399Z",
"RandGuid": "24ad4c06-24ad-4e5b-8294-3da9714877e9"
}
}
}
Response payload
By convention, function responses are formatted as key/value pairs. Supported keys include:
Outputs : Holds the values of the function's output bindings; the keys must match the output binding names defined in function.json.
Logs : An array of messages that appear in the Functions host logs and, when running in Azure, in Application Insights.
ReturnValue : Provides the response value when an output is configured as $return in function.json.
Examples
Custom handlers can be implemented in any language that supports receiving HTTP events. The following
examples show how to implement a custom handler using the Go programming language.
Function with bindings
The scenario implemented in this example features a function named order that accepts a POST with a payload
representing a product order. As an order is posted to the function, a Queue Storage message is created and an
HTTP response is returned.
Implementation
In a folder named order, the function.json file configures the HTTP-triggered function.
order/function.json
{
"bindings": [
{
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": ["post"]
},
{
"type": "http",
"direction": "out",
"name": "res"
},
{
"type": "queue",
"name": "message",
"direction": "out",
"queueName": "orders",
"connection": "AzureWebJobsStorage"
}
]
}
This function is defined as an HTTP triggered function that returns an HTTP response and outputs a Queue
storage message.
At the root of the app, the host.json file is configured to run an executable file named handler.exe ( handler in
Linux or macOS).
{
"version": "2.0",
"customHandler": {
"description": {
"defaultExecutablePath": "handler.exe"
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[1.*, 2.0.0)"
}
}
For example, suppose the following order payload is posted to the function's HTTP endpoint:
{
"id": 1005,
"quantity": 2,
"color": "black"
}
The Functions runtime will then send the following HTTP request to the custom handler:
{
"Data": {
"req": {
"Url": "http://localhost:7071/api/order",
"Method": "POST",
"Query": "{}",
"Headers": {
"Content-Type": [
"application/json"
]
},
"Params": {},
"Body": "{\"id\":1005,\"quantity\":2,\"color\":\"black\"}"
}
},
"Metadata": {
}
}
NOTE
Some portions of the payload were removed for brevity.
handler.exe is the compiled Go custom handler program that runs a web server and responds to function
invocation requests from the Functions host.
package main
import (
"encoding/json"
"fmt"
"log"
"net/http"
"os"
)
type InvokeRequest struct {
	Data     map[string]json.RawMessage
	Metadata map[string]interface{}
}

type InvokeResponse struct {
	Outputs     map[string]interface{}
	Logs        []string
	ReturnValue interface{}
}

func orderHandler(w http.ResponseWriter, r *http.Request) {
	// Decode the invocation payload sent by the Functions host.
	var invokeRequest InvokeRequest
	d := json.NewDecoder(r.Body)
	d.Decode(&invokeRequest)

	// Pull the original HTTP request data out of the "req" trigger binding.
	var reqData map[string]interface{}
	json.Unmarshal(invokeRequest.Data["req"], &reqData)

	// Set the queue output binding ("message") to the original request body.
	outputs := make(map[string]interface{})
	outputs["message"] = reqData["Body"]

	// Build the HTTP response output binding ("res").
	resData := make(map[string]interface{})
	resData["body"] = "Order enqueued"
	outputs["res"] = resData
	invokeResponse := InvokeResponse{outputs, nil, nil}

	responseJson, _ := json.Marshal(invokeResponse)

	w.Header().Set("Content-Type", "application/json")
	w.Write(responseJson)
}
func main() {
customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
if !exists {
customHandlerPort = "8080"
}
mux := http.NewServeMux()
mux.HandleFunc("/order", orderHandler)
fmt.Println("Go server Listening on: ", customHandlerPort)
log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
}
In this example, the custom handler runs a web server to handle HTTP events and is set to listen for requests via
the FUNCTIONS_CUSTOMHANDLER_PORT .
Even though the Functions host received the original HTTP request at /api/order , it invokes the custom handler
using the function name (its folder name). In this example, the function is defined at the path of /order . The
host sends the custom handler an HTTP request at the path of /order .
As POST requests are sent to this function, the trigger data and function metadata are available via the HTTP
request body. The original HTTP request body can be accessed in the payload's Data.req.Body .
The function's response is formatted into key/value pairs where the Outputs member holds a JSON value
where the keys match the outputs as defined in the function.json file.
This is an example payload that this handler returns to the Functions host.
{
"Outputs": {
"message": "{\"id\":1005,\"quantity\":2,\"color\":\"black\"}",
"res": {
"body": "Order enqueued"
}
},
"Logs": null,
"ReturnValue": null
}
By setting the message output equal to the order data that came in from the request, the function outputs that
order data to the configured queue. The Functions host also returns the HTTP response configured in res to the
caller.
HTTP-only function
For HTTP-triggered functions with no additional bindings or outputs, you may want your handler to work
directly with the HTTP request and response instead of the custom handler request and response payloads. This
behavior can be configured in host.json using the enableForwardingHttpRequest setting.
IMPORTANT
The primary purpose of the custom handlers feature is to enable languages and runtimes that do not currently have first-
class support on Azure Functions. While it may be possible to run web applications using custom handlers, Azure
Functions is not a standard reverse proxy. Some features such as response streaming, HTTP/2, and WebSockets are not
available. Some components of the HTTP request such as certain headers and routes may be restricted. Your application
may also experience excessive cold start.
To address these circumstances, consider running your web apps on Azure App Service.
The following example demonstrates how to configure an HTTP-triggered function with no additional bindings
or outputs. The scenario implemented in this example features a function named hello that accepts a GET or
POST .
Implementation
In a folder named hello, the function.json file configures the HTTP-triggered function.
hello/function.json
{
"bindings": [
{
"type": "httpTrigger",
"authLevel": "anonymous",
"direction": "in",
"name": "req",
"methods": ["get", "post"]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
The function is configured to accept both GET and POST requests and the result value is provided via an
argument named res .
At the root of the app, the host.json file is configured to run handler.exe and enableForwardingHttpRequest is set
to true .
{
"version": "2.0",
"customHandler": {
"description": {
"defaultExecutablePath": "handler.exe"
},
"enableForwardingHttpRequest": true
}
}
When enableForwardingHttpRequest is true , the behavior of HTTP-only functions differs from the default
custom handlers behavior in these ways:
The HTTP request does not contain the custom handlers request payload. Instead, the Functions host invokes
the handler with a copy of the original HTTP request.
The Functions host invokes the handler with the same path as the original request including any query string
parameters.
The Functions host returns a copy of the handler's HTTP response as the response to the original request.
The following is a POST request to the Functions host. The Functions host then sends a copy of the request to
the custom handler at the same path.
{
"message": "Hello World!"
}
The file handler.go file implements a web server and HTTP function.
package main
import (
"fmt"
"io/ioutil"
"log"
"net/http"
"os"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method == "GET" {
		// GET requests return a greeting.
		w.Write([]byte("hello world"))
	} else {
		// POST requests echo the request body back to the caller.
		body, _ := ioutil.ReadAll(r.Body)
		w.Write(body)
	}
}

func main() {
customHandlerPort, exists := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
if !exists {
customHandlerPort = "8080"
}
mux := http.NewServeMux()
mux.HandleFunc("/api/hello", helloHandler)
fmt.Println("Go server Listening on: ", customHandlerPort)
log.Fatal(http.ListenAndServe(":"+customHandlerPort, mux))
}
In this example, the custom handler creates a web server to handle HTTP events and is set to listen for requests
via the FUNCTIONS_CUSTOMHANDLER_PORT .
GET requests are handled by returning a string, and POST requests have access to the request body.
The route for the hello function here is /api/hello , the same as the original request.
NOTE
The FUNCTIONS_CUSTOMHANDLER_PORT is not the public facing port used to call the function. This port is used by the
Functions host to call the custom handler.
Deploying
A custom handler can be deployed to every Azure Functions hosting option. If your handler requires operating
system or platform dependencies (such as a language runtime), you may need to use a custom container.
When creating a function app in Azure for custom handlers, we recommend you select .NET Core as the stack. A
"Custom" stack for custom handlers will be added in the future.
To deploy a custom handler app using Azure Functions Core Tools, run the following command.
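For example, with the Core Tools installed and the project folder as the current directory:

func azure functionapp publish <APP_NAME>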
Restrictions
The custom handler web server needs to start within 60 seconds.
Samples
Refer to the custom handler samples GitHub repo for examples of how to implement functions in a variety of
different languages.
To help with troubleshooting, you can configure the Functions host to output more detailed logs by setting the default log level to Trace in host.json:
{
"version": "2.0",
"customHandler": {
"description": {
"defaultExecutablePath": "handler.exe"
}
},
"logging": {
"logLevel": {
"default": "Trace"
}
}
}
The Functions host outputs extra log messages including information related to the custom handler process. Use
the logs to investigate problems starting your custom handler process or invoking functions in your custom
handler.
Locally, logs are printed to the console.
In Azure, query Application Insights traces to view the log messages. If your app produces a high volume of logs,
only a subset of log messages are sent to Application Insights. Disable sampling to ensure all messages are
logged.
Test custom handler in isolation
Custom handler apps are a web server process, so it may be helpful to start it on its own and test function
invocations by sending mock HTTP requests using a tool like cURL or Postman.
You can also use this strategy in your CI/CD pipelines to run automated tests on your custom handler.
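For example, a hypothetical invocation of the MyQueueFunction handler shown earlier, assuming the handler was started locally and is listening on port 8080:

curl --request POST --header "Content-Type: application/json" --data '{"Data":{"myQueueItem":"test message"},"Metadata":{}}' http://localhost:8080/MyQueueFunction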
Execution environment
Custom handlers run in the same environment as a typical Azure Functions app. Test your handler to ensure the
environment contains all the dependencies it needs to run. For apps that require additional dependencies, you
may need to run them using a custom container image hosted on Azure Functions Premium plan.
Get support
If you need help on a function app with custom handlers, you can submit a request through regular support
channels. However, due to the wide variety of possible languages used to build custom handlers apps, support is
not unlimited.
Support is available if the Functions host has problems starting or communicating with the custom handler
process. For problems specific to the inner workings of your custom handler process, such as issues with the
chosen language or framework, our Support Team is unable to provide assistance in this context.
Next steps
Get started building an Azure Functions app in Go or Rust with the custom handlers quickstart.
Azure Functions support for availability zone
redundancy
8/2/2022 • 5 minutes to read
Availability zone (AZ) support for Azure Functions is now available on Premium (Elastic Premium) and
Dedicated (App Service) plans. A zone-redundant Functions application automatically balances its instances
between availability zones for higher availability. This article focuses on zone redundancy support for Premium
plans. For zone redundancy on Dedicated plans, refer here.
IMPORTANT
Azure Functions runs on the Azure App Service platform. In the App Service platform, plans that host Premium plan
function apps are referred to as Elastic Premium plans, with SKU names like EP1 . If you choose to run your function app
on a Premium plan, make sure to create a plan with an SKU name that starts with "E", such as EP1 . App Service plan
SKU names that start with "P", such as P1V2 (Premium V2 Small plan), are actually Dedicated hosting plans. Because
they are Dedicated and not Elastic Premium, plans with SKU names starting with "P" won't scale dynamically and may
increase your costs.
Overview
An availability zone is a high-availability offering that protects your applications and data from datacenter
failures. Availability zones are unique physical locations within an Azure region. Each zone comprises one or
more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a
minimum of three separate zones in all enabled regions. You can build high-availability into your application
architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating
into other zones.
A zone redundant function app automatically distributes the instances your app runs on between the availability
zones in the region. For apps running in a zone-redundant Premium plan, even as the app scales in and out, the
instances the app is running on are still evenly distributed between availability zones.
Requirements
When hosting in a zone-redundant Premium plan, the following requirements must be met.
You must use a zone-redundant storage (ZRS) account for your function app's storage account. If you use a
different type of storage account, Functions may show unexpected behavior during a zonal outage.
Both Windows and Linux are supported.
Must be hosted on an Elastic Premium or Dedicated hosting plan. Instructions on zone redundancy with
Dedicated (App Service) hosting plan can be found in this article.
Availability zone (AZ) support isn't currently available for function apps on Consumption plans.
Zone redundant plans must specify a minimum instance count of three.
Function apps hosted on a Premium plan must also have a minimum always ready instances count of three.
Zone-redundant Premium plans can currently be enabled in any of the following regions:
West US 2
West US 3
Central US
South Central US
East US
East US 2
Canada Central
Brazil South
North Europe
West Europe
Germany West Central
France Central
UK South
Japan East
Southeast Asia
Australia East
How to deploy a zone-redundant plan
1. Open the Azure portal and navigate to the Create Function App page. Information on creating a
function app in the portal can be found here.
2. In the Basics page, fill out the fields for your function app. Pay special attention to the settings that have
specific requirements for zone redundancy.
Next steps
Improve the performance and reliability of Azure Functions
Supported languages in Azure Functions
8/2/2022 • 3 minutes to read
This article explains the levels of support offered for languages that you can use with Azure Functions. It also
describes strategies for creating functions using languages not natively supported.
Levels of support
There are two levels of support:
Generally available (GA) - Fully supported and approved for production use.
Preview - Not yet supported, but expected to reach GA status in the future.
LANGUAGE      1.X                        2.X                      3.X                                  4.X
C#            GA (.NET Framework 4.8)    GA (.NET Core 2.1¹)      GA (.NET Core 3.1), GA (.NET 5.0)    GA (.NET 6.0), Preview (.NET 7), Preview (.NET Framework 4.8)
F#            GA (.NET Framework 4.8)    GA (.NET Core 2.1¹)      GA (.NET Core 3.1)                   GA (.NET 6.0)
Python        N/A                        GA (Python 3.7 & 3.6)    GA (Python 3.9, 3.8, 3.7, & 3.6)     GA (Python 3.9, 3.8, 3.7)
TypeScript²   N/A                        GA                       GA                                   GA
1 .NET class library apps targeting runtime version 2.x run on .NET Core 3.1 in .NET Core 2.x compatibility
mode. To learn more, see Functions v2.x considerations.
2 Supported through transpiling to JavaScript.
See the language-specific developer guide article for more details about supported language versions.
For information about planned changes to language support, see Azure roadmap.
C# script .NET ✓ ✓ ✓
JavaScript Node.js ✓ ✓ ✓
Python Python ✓ ✓
Java Java ✓ ✓
TypeScript Node.js ✓ ✓
1 In the portal, you can't currently create function apps that run on .NET 5.0. For more information on .NET 5
functions, see Develop and publish .NET 5 functions using Azure Functions.
For more information on operating system and language support, see Operating system/runtime support.
When in-portal editing isn't available, you must instead develop your functions locally.
Language major version support
Azure Functions provides a guarantee of support for the major versions of supported programming languages.
For most languages, there are minor or patch versions released to update a supported major version. Examples
of minor or patch versions include Python 3.9.1 and Node 14.17. After new minor versions of supported
languages become available, the minor versions used by your functions apps are automatically upgraded to
these newer minor or patch versions.
NOTE
Because Azure Functions can remove the support of older minor versions at any time after a new minor version is
available, you shouldn't pin your function apps to a specific minor/patch version of a programming language.
Custom handlers
Custom handlers are lightweight web servers that receive events from the Azure Functions host. Any language
that supports HTTP primitives can implement a custom handler. This means that custom handlers can be used to
create functions in languages that aren't officially supported. To learn more, see Azure Functions custom
handlers.
Language extensibility
Starting with version 2.x, the runtime is designed to offer language extensibility. The JavaScript and Java
languages in the 2.x runtime are built with this extensibility.
Next steps
To learn more about how to develop functions in the supported languages, see the following resources:
C# class library developer reference
C# script developer reference
Java developer reference
JavaScript developer reference
PowerShell developer reference
Python developer reference
TypeScript developer reference
Develop C# class library functions using Azure
Functions
8/2/2022 • 20 minutes to read
This article is an introduction to developing Azure Functions by using C# in .NET class libraries.
IMPORTANT
This article supports .NET class library functions that run in-process with the runtime. Your C# functions can also run out-
of-process and isolated from the Functions runtime. The isolated model is the only way to run .NET 5.x and the preview of
.NET Framework 4.8 using recent versions of the Functions runtime. To learn more, see .NET isolated process functions.
Azure Functions supports C# and C# script programming languages. If you're looking for guidance on using C#
in the Azure portal, see C# script (.csx) developer reference.
Supported versions
Versions of the Functions runtime work with specific versions of .NET. To learn more about Functions versions,
see Azure Functions runtime versions overview. Version support depends on whether your functions run in-
process or out-of-process (isolated).
NOTE
To learn how to change the Functions runtime version used by your function app, see view and update the current
runtime version.
The following table shows the highest level of .NET Core or .NET Framework that can be used with a specific
version of Functions.
FUNCTIONS RUNTIME VERSION    IN-PROCESS (.NET CLASS LIBRARY)    OUT-OF-PROCESS (.NET ISOLATED)
Functions 4.x                .NET 6.0                           .NET 6.0, .NET 7.0 (preview), .NET Framework 4.8 (preview)¹
Functions 3.x                .NET Core 3.1                      .NET 5.0²
Functions 2.x                .NET Core 2.1³                     n/a
Functions 1.x                .NET Framework 4.8                 n/a
1 Build process also requires .NET 6 SDK. Support for .NET Framework 4.8 is in preview.
2 Build process also requires .NET Core 3.1 SDK.
3 For details, see Functions v2.x considerations.
For the latest news about Azure Functions releases, including the removal of specific older minor versions,
monitor Azure App Service announcements.
Functions v2.x considerations
Function apps that target the latest 2.x version ( ~2 ) are automatically upgraded to run on .NET Core 3.1.
Because of breaking changes between .NET Core versions, not all apps developed and compiled against .NET
Core 2.2 can be safely upgraded to .NET Core 3.1. You can opt out of this upgrade by pinning your function app
to ~2.0 . Functions also detects incompatible APIs and may pin your app to ~2.0 to prevent incorrect execution
on .NET Core 3.1.
NOTE
If your function app is pinned to ~2.0 and you change this version target to ~2 , your function app may break. If you
deploy using ARM templates, check the version in your templates. If this occurs, change your version back to target
~2.0 and fix compatibility issues.
Function apps that target ~2.0 continue to run on .NET Core 2.2. This version of .NET Core no longer receives
security and other maintenance updates. To learn more, see this announcement page.
You should work to make your functions compatible with .NET Core 3.1 as soon as possible. After you've
resolved these issues, change your version back to ~2 or upgrade to ~3 . To learn more about targeting
versions of the Functions runtime, see How to target Azure Functions runtime versions.
When running on Linux in a Premium or dedicated (App Service) plan, you pin your version by instead targeting
a specific image by setting the linuxFxVersion site config setting to
DOCKER|mcr.microsoft.com/azure-functions/dotnet:2.0.14786-appservice . To learn how to set linuxFxVersion , see
Manual version updates on Linux.
This directory is what gets deployed to your function app in Azure. The binding extensions required in version
2.x of the Functions runtime are added to the project as NuGet packages.
IMPORTANT
The build process creates a function.json file for each function. This function.json file is not meant to be edited directly. You
can't change binding configuration or disable the function by editing this file. To learn how to disable a function, see How
to disable functions.
The FunctionName attribute marks the method as a function entry point. The name must be unique within a
project, start with a letter and only contain letters, numbers, _ , and - , up to 127 characters in length. Project
templates often create a method named Run , but the method name can be any valid C# method name.
The trigger attribute specifies the trigger type and binds input data to a method parameter. The example
function is triggered by a queue message, and the queue message is passed to the method in the myQueueItem
parameter.
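A minimal sketch of such a function (the queue name and log message are illustrative):

public static class QueueFunctions
{
    [FunctionName("QueueTrigger")]
    public static void Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        ILogger log)
    {
        // The queue message is bound to the myQueueItem parameter.
        log.LogInformation($"C# function processed: {myQueueItem}");
    }
}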
Values assigned to output bindings are written when the function exits. You can use more than one output
binding in a function by simply assigning values to multiple output parameters.
The binding reference articles (Storage queues, for example) explain which parameter types you can use with
trigger, input, or output binding attributes.
Binding expressions example
The following code gets the name of the queue to monitor from an app setting, and it gets the queue message
creation time in the insertionTime parameter.
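A minimal sketch, assuming an app setting named queueappsetting that holds the queue name:

public static class BindingExpressionsExample
{
    [FunctionName("ResolveAppSetting")]
    public static void Run(
        [QueueTrigger("%queueappsetting%")] string myQueueItem,
        DateTimeOffset insertionTime,
        ILogger log)
    {
        // %queueappsetting% resolves to the queue name from app settings;
        // insertionTime is bound from the queue message metadata.
        log.LogInformation($"Processed: {myQueueItem}, created: {insertionTime}");
    }
}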
Autogenerated function.json
The build process creates a function.json file in a function folder in the build folder. As noted earlier, this file is
not meant to be edited directly. You can't change binding configuration or disable the function by editing this
file.
The purpose of this file is to provide information to the scale controller to use for scaling decisions on the
Consumption plan. For this reason, the file only has trigger info, not input/output bindings.
The generated function.json file includes a configurationSource property that tells the runtime to use .NET
attributes for bindings, rather than function.json configuration. Here's an example:
{
"generatedBy": "Microsoft.NET.Sdk.Functions-1.0.0.0",
"configurationSource": "attributes",
"bindings": [
{
"type": "queueTrigger",
"queueName": "%input-queue-name%",
"name": "myQueueItem"
}
],
"disabled": false,
"scriptFile": "..\\bin\\FunctionApp1.dll",
"entryPoint": "FunctionApp1.QueueTrigger.Run"
}
Microsoft.NET.Sdk.Functions
The function.json file generation is performed by the NuGet package Microsoft.NET.Sdk.Functions.
The same package is used for both version 1.x and 2.x of the Functions runtime. The target framework is what
differentiates a 1.x project from a 2.x project. Here are the relevant parts of the .csproj files, showing different
target frameworks with the same Sdk package:
v2.x+
v1.x
<PropertyGroup>
<TargetFramework>netcoreapp2.1</TargetFramework>
<AzureFunctionsVersion>v2</AzureFunctionsVersion>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.8" />
</ItemGroup>
Among the Sdk package dependencies are triggers and bindings. A 1.x project refers to 1.x triggers and
bindings because those triggers and bindings target the .NET Framework, while 2.x triggers and bindings target
.NET Core.
The Sdk package also depends on Newtonsoft.Json, and indirectly on WindowsAzure.Storage. These
dependencies make sure that your project uses the versions of those packages that work with the Functions
runtime version that the project targets. For example, Newtonsoft.Json has version 11 for .NET Framework 4.6.1,
but the Functions runtime that targets .NET Framework 4.6.1 is only compatible with Newtonsoft.Json 9.0.1. So
your function code in that project also has to use Newtonsoft.Json 9.0.1.
The source code for Microsoft.NET.Sdk.Functions is available in the GitHub repo azure-functions-vs-build-sdk.
ReadyToRun
You can compile your function app as ReadyToRun binaries. ReadyToRun is a form of ahead-of-time compilation
that can improve startup performance to help reduce the impact of cold-start when running in a Consumption
plan.
ReadyToRun is available in .NET Core 3.0 and later and requires version 3.0 of the Azure Functions runtime.
To compile your project as ReadyToRun, update your project file by adding the <PublishReadyToRun> and
<RuntimeIdentifier> elements. The following is the configuration for publishing to a Windows 32-bit function
app.
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<AzureFunctionsVersion>v3</AzureFunctionsVersion>
<PublishReadyToRun>true</PublishReadyToRun>
<RuntimeIdentifier>win-x86</RuntimeIdentifier>
</PropertyGroup>
IMPORTANT
ReadyToRun currently doesn't support cross-compilation. You must build your app on the same platform as the
deployment target. Also, pay attention to the "bitness" that is configured in your function app. For example, if your
function app in Azure is Windows 64-bit, you must compile your app on Windows with win-x64 as the runtime
identifier.
You can also build your app with ReadyToRun from the command line. For more information, see the
-p:PublishReadyToRun=true option in dotnet publish .
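For example, publishing for a Windows 32-bit function app from the command line:

dotnet publish -c Release -r win-x86 -p:PublishReadyToRun=true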
TIP
If you plan to use the HTTP or WebHook bindings, plan to avoid port exhaustion that can be caused by improper
instantiation of HttpClient . For more information, see How to manage connections in Azure Functions.
Async
To make a function asynchronous, use the async keyword and return a Task object.
You can't use out parameters in async functions. For output bindings, use the function return value or a
collector object instead.
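A minimal sketch of an async function (the blob paths are illustrative):

public static class AsyncExample
{
    [FunctionName("BlobCopy")]
    public static async Task RunAsync(
        [BlobTrigger("sample-images/{blobName}")] Stream blobInput,
        [Blob("sample-images-copies/{blobName}", FileAccess.Write)] Stream blobOutput)
    {
        // Copy the triggering blob to the output blob asynchronously.
        await blobInput.CopyToAsync(blobOutput, 4096);
    }
}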
Cancellation tokens
A function can accept a CancellationToken parameter, which enables the operating system to notify your code
when the function is about to be terminated. You can use this notification to make sure the function doesn't
terminate unexpectedly in a way that leaves data in an inconsistent state.
Consider the case when you have a function that processes messages in batches. The following Azure Service
Bus-triggered function processes an array of Message objects, which represents a batch of incoming messages
to be processed by a specific function invocation:
using Microsoft.Azure.ServiceBus;
using System.Threading;
namespace ServiceBusCancellationToken
{
public static class servicebus
{
[FunctionName("servicebus")]
public static void Run([ServiceBusTrigger("csharpguitar", Connection = "SB_CONN")]
Message[] messages, CancellationToken cancellationToken, ILogger log)
{
try
{
foreach (var message in messages)
{
if (cancellationToken.IsCancellationRequested)
{
log.LogInformation("A cancellation token was received. Taking precautionary
actions.");
//Take precautions like noting how far along you are with processing the batch
log.LogInformation("Precautionary activities --complete--.");
break;
}
else
{
//business logic as usual
log.LogInformation($"Message: {message} was processed.");
}
}
}
catch (Exception ex)
{
log.LogInformation($"Something unexpected happened: {ex.Message}");
}
}
}
}
Logging
In your function code, you can write output to logs that appear as traces in Application Insights. The
recommended way to write to the logs is to include a parameter of type ILogger, which is typically named log .
Version 1.x of the Functions runtime used TraceWriter , which also writes to Application Insights, but doesn't
support structured logging. Don't use Console.Write to write your logs, since this data isn't captured by
Application Insights.
ILogger
In your function definition, include an ILogger parameter, which supports structured logging.
With an ILogger object, you call Log<level> extension methods on ILogger to create logs. The following code
writes Information logs with category Function.<YOUR_FUNCTION_NAME>.User. :
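A minimal sketch (the queue name is illustrative):

public static class LoggingExample
{
    [FunctionName("QueueTrigger2")]
    public static void Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        ILogger log)
    {
        // Writes an Information-level trace under the Function.QueueTrigger2.User category.
        log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
    }
}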
To learn more about how Functions implements ILogger , see Collecting telemetry data. Categories prefixed
with Function assume you're using an ILogger instance. If you choose to use an ILogger<T> instead, the
category name may be based on T .
Structured logging
The order of placeholders, not their names, determines which parameters are used in the log message. Suppose
you have the following code:
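A sketch consistent with the field names discussed below:

string partitionKey = "partitionKey";
string rowKey = "rowKey";
log.LogInformation("partitionKey={partitionKey}, rowKey={rowKey}", partitionKey, rowKey);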
If you keep the same message string and reverse the order of the parameters, the resulting message text would
have the values in the wrong places.
Placeholders are handled this way so that you can do structured logging. Application Insights stores the
parameter name-value pairs and the message string. The result is that the message arguments become fields
that you can query on.
If your logger method call looks like the previous example, you can query the field
customDimensions.prop__rowKey . The prop__ prefix is added to ensure there are no collisions between fields the
runtime adds and fields your function code adds.
You can also query on the original message string by referencing the field
customDimensions.prop__{OriginalFormat} .
{
"customDimensions": {
"prop__{OriginalFormat}":"C# Queue trigger function processed: {message}",
"Category":"Function",
"LogLevel":"Information",
"prop__message":"c9519cbf-b1e6-4b9b-bf24-cb7d10b1bb89"
}
}
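Assuming the standard Microsoft.ApplicationInsights NuGet package, installation from the command line looks like:

dotnet add package Microsoft.ApplicationInsights --version <VERSION>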
In this command, replace <VERSION> with a version of this package that supports your installed version of
Microsoft.Azure.WebJobs.
The following C# example uses the custom telemetry API. The example is for a .NET class library, but the
Application Insights code is the same for C# script.
v2.x+
v1.x
Version 2.x and later versions of the runtime use newer features in Application Insights to automatically
correlate telemetry with the current operation. There's no need to manually set the operation Id , ParentId , or
Name fields.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using System.Linq;
namespace functionapp0915
{
public class HttpTrigger2
{
private readonly TelemetryClient telemetryClient;
/// Using dependency injection will guarantee that you use the same configuration for telemetry collected automatically and manually.
public HttpTrigger2(TelemetryConfiguration telemetryConfiguration)
{
this.telemetryClient = new TelemetryClient(telemetryConfiguration);
}
[FunctionName("HttpTrigger2")]
public Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)]
HttpRequest req, ExecutionContext context, ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
DateTime start = DateTime.UtcNow;
Testing functions
The following articles show how to run an in-process C# class library function locally for testing purposes:
Visual Studio
Visual Studio Code
Command line
Environment variables
To get an environment variable or an app setting value, use System.Environment.GetEnvironmentVariable , as
shown in the following code example:
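A minimal sketch (the timer schedule is illustrative):

public static class EnvironmentVariablesExample
{
    [FunctionName("GetEnvironmentVariables")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
    {
        // Reads the WEBSITE_SITE_NAME app setting from the process environment.
        var siteName = Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME", EnvironmentVariableTarget.Process);
        log.LogInformation($"Site name: {siteName}");
    }
}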
App settings can be read from environment variables both when developing locally and when running in Azure.
When developing locally, app settings come from the Values collection in the local.settings.json file. In both
environments, local and Azure, GetEnvironmentVariable("<app setting name>") retrieves the value of the named
app setting. For instance, when you're running locally, "My Site Name" would be returned if your
local.settings.json file contains { "Values": { "WEBSITE_SITE_NAME": "My Site Name" } } .
The System.Configuration.ConfigurationManager.AppSettings property is an alternative API for getting app
setting values, but we recommend that you use GetEnvironmentVariable as shown here.
Binding at runtime
In C# and other .NET languages, you can use an imperative binding pattern, as opposed to the declarative
bindings in attributes. Imperative binding is useful when binding parameters need to be computed at runtime
rather than design time. With this pattern, you can bind to supported input and output bindings on-the-fly in
your function code.
Define an imperative binding as follows:
Do not include an attribute in the function signature for your desired imperative bindings.
Pass in an input parameter Binder binder or IBinder binder .
Use the following C# pattern to perform the data binding.
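A sketch of the pattern, where BindingTypeAttribute and T stand in for a concrete binding attribute and a type it supports:

using (var output = await binder.BindAsync<T>(new BindingTypeAttribute(...)))
{
    // Use the bound output object here.
}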
BindingTypeAttribute is the .NET attribute that defines your binding, and T is an input or output type
that's supported by that binding type. T cannot be an out parameter type (such as out JObject ). For
example, the Mobile Apps table output binding supports six output types, but you can only use
ICollector<T> or IAsyncCollector<T> with imperative binding.
Single attribute example
The following example code creates a Storage blob output binding with a blob path that's defined at run time,
then writes a string to the blob.
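A minimal sketch (the queue and blob names are illustrative):

public static class IBinderExample
{
    [FunctionName("CreateBlobUsingBinder")]
    public static void Run(
        [QueueTrigger("myqueue-items-source")] string myQueueItem,
        IBinder binder,
        ILogger log)
    {
        log.LogInformation($"CreateBlobUsingBinder function processed: {myQueueItem}");
        // Bind a TextWriter to a blob whose path is computed at run time.
        using (var writer = binder.Bind<TextWriter>(
            new BlobAttribute($"samples-output/{myQueueItem}", FileAccess.Write)))
        {
            writer.Write("Hello World!");
        }
    }
}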
BlobAttribute defines the Storage blob input or output binding, and TextWriter is a supported output binding
type.
Multiple attributes example
The preceding example gets the app setting for the function app's main Storage account connection string
(which is AzureWebJobsStorage ). You can specify a custom app setting to use for the Storage account by adding
the StorageAccountAttribute and passing the attribute array into BindAsync<T>() . Use a Binder parameter, not
IBinder . For example:
public static class IBinderExampleMultipleAttributes
{
[FunctionName("CreateBlobInDifferentStorageAccount")]
public async static Task RunAsync(
[QueueTrigger("myqueue-items-source-binder2")] string myQueueItem,
Binder binder,
ILogger log)
{
log.LogInformation($"CreateBlobInDifferentStorageAccount function processed: {myQueueItem}");
var attributes = new Attribute[]
{
new BlobAttribute($"samples-output/{myQueueItem}", FileAccess.Write),
new StorageAccountAttribute("MyStorageAccount")
};
using (var writer = await binder.BindAsync<TextWriter>(attributes))
{
await writer.WriteAsync("Hello World!!");
}
}
}
TYPE                  1.X    2.X AND HIGHER¹    TRIGGER    INPUT    OUTPUT
Blob storage          ✔      ✔                  ✔          ✔        ✔
Azure Cosmos DB       ✔      ✔                  ✔          ✔        ✔
Azure SQL (preview)          ✔                             ✔        ✔
Dapr³                        ✔                  ✔          ✔        ✔
Event Grid            ✔      ✔                  ✔                   ✔
Event Hubs            ✔      ✔                  ✔                   ✔
HTTP & webhooks       ✔      ✔                  ✔                   ✔
IoT Hub               ✔      ✔                  ✔
Kafka²                       ✔                  ✔                   ✔
Mobile Apps           ✔                                    ✔        ✔
Notification Hubs     ✔                                             ✔
Queue storage         ✔      ✔                  ✔                   ✔
RabbitMQ²                    ✔                  ✔                   ✔
SendGrid              ✔      ✔                                      ✔
Service Bus           ✔      ✔                  ✔                   ✔
SignalR                      ✔                  ✔          ✔        ✔
Table storage         ✔      ✔                             ✔        ✔
Timer                 ✔      ✔                  ✔
Twilio                ✔      ✔                                      ✔
1 Starting with the version 2.x runtime, all bindings except HTTP and Timer must be registered. See Register
binding extensions.
2 Triggers aren't supported in the Consumption plan. Requires runtime-driven triggers.
3 Supported only in Kubernetes, IoT Edge, and other self-hosted modes.
Next steps
Learn more about triggers and bindings
Learn more about best practices for Azure Functions
Guide for running C# Azure Functions in an isolated
process
8/2/2022 • 14 minutes to read
This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in
Azure Functions. Running out-of-process lets you decouple your function code from the Azure Functions
runtime. Isolated process C# functions run on .NET 6.0, .NET 7.0, and .NET Framework 4.8 (preview support). In-
process C# class library functions aren't supported on .NET 7.0.
Supported versions
Versions of the Functions runtime work with specific versions of .NET. To learn more about Functions versions,
see Azure Functions runtime versions overview. Version support depends on whether your functions run in-
process or out-of-process (isolated).
NOTE
To learn how to change the Functions runtime version used by your function app, see view and update the current
runtime version.
The following table shows the highest level of .NET Core or .NET Framework that can be used with a specific
version of Functions.
FUNCTIONS RUNTIME VERSION | IN-PROCESS (.NET CLASS LIBRARY) | OUT-OF-PROCESS (.NET ISOLATED)
Functions 4.x | .NET 6.0 | .NET 6.0, .NET 7.0, .NET Framework 4.8 (preview) 1
Functions 3.x | .NET Core 3.1 | .NET 5.0 2
Functions 2.x 3 | .NET Core 2.1 | n/a
Functions 1.x | .NET Framework 4.8 | n/a
1 Build process also requires .NET 6 SDK. Support for .NET Framework 4.8 is in preview.
2 Build process also requires .NET Core 3.1 SDK.
3 For details, see Functions v2.x considerations.
For the latest news about Azure Functions releases, including the removal of specific older minor versions,
monitor Azure App Service announcements.
NOTE
To be able to publish your isolated function project to either a Windows or a Linux function app in Azure, you must set a
value of dotnet-isolated in the remote FUNCTIONS_WORKER_RUNTIME application setting. To support zip
deployment and running from the deployment package on Linux, you also need to update the linuxFxVersion site
config setting to DOTNET-ISOLATED|7.0 . To learn more, see Manual version updates on Linux.
Package references
When your functions run out-of-process, your .NET project uses a unique set of packages, which implement both
core functionality and binding extensions.
Core packages
The following packages are required to run your .NET functions in an isolated process:
Microsoft.Azure.Functions.Worker
Microsoft.Azure.Functions.Worker.Sdk
Extension packages
Because functions that run in a .NET isolated process use different binding types, they require a unique set of
binding extension packages.
You'll find these extension packages under Microsoft.Azure.Functions.Worker.Extensions.
Start-up and configuration
When using .NET isolated functions, you're responsible for creating and starting your own host instance in Program.cs; the start-up code ends by building and running the host:
await host.RunAsync();
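For context, a minimal Program.cs might look like the following sketch (the exact template contents vary by SDK version):
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    // Adds the settings required to run out-of-process, as described below.
    .ConfigureFunctionsWorkerDefaults()
    .Build();

await host.RunAsync();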
IMPORTANT
If your project targets .NET Framework 4.8, you also need to add FunctionsDebugger.Enable(); before creating the
HostBuilder. It should be the first line of your Main() method. See Debugging when targeting .NET Framework for more
information.
Configuration
The ConfigureFunctionsWorkerDefaults method is used to add the settings required for the function app to run
out-of-process, which includes the following functionality:
Default set of converters.
Set the default JsonSerializerOptions to ignore casing on property names.
Integrate with Azure Functions logging.
Output binding middleware and features.
Function execution middleware.
Default gRPC support.
.ConfigureFunctionsWorkerDefaults(builder =>
{
builder
.AddApplicationInsights()
.AddApplicationInsightsLogger();
})
Having access to the host builder pipeline means that you can also set any app-specific configurations during
initialization. You can call the ConfigureAppConfiguration method on HostBuilder one or more times to add the
configurations required by your function app. To learn more about app configuration, see Configuration in
ASP.NET Core.
These configurations apply to your function app running in a separate process. To make changes to the
functions host or trigger and binding configuration, you'll still need to use the host.json file.
Dependency injection
Dependency injection is simplified, compared to .NET class libraries. Rather than having to create a startup class
to register services, you just have to call ConfigureServices on the host builder and use the extension methods
on IServiceCollection to inject specific services.
The following example injects a singleton service dependency:
.ConfigureServices(s =>
{
s.AddSingleton<IHttpResponderService, DefaultHttpResponderService>();
})
This code requires using Microsoft.Extensions.DependencyInjection; . To learn more, see Dependency injection in
ASP.NET Core.
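The registered service can then be consumed through standard constructor injection in a function class. The following sketch reuses the IHttpResponderService registered above; the ProcessRequest method is an assumed member of that service, not part of the Functions API:
public class HttpFunction
{
    private readonly IHttpResponderService _responder;

    // The host injects the service registered in ConfigureServices.
    public HttpFunction(IHttpResponderService responder)
    {
        _responder = responder;
    }

    [Function("HttpFunction")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req,
        FunctionContext context)
    {
        return _responder.ProcessRequest(req);
    }
}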
Middleware
.NET isolated also supports middleware registration, again by using a model similar to what exists in ASP.NET.
This model gives you the ability to inject logic into the invocation pipeline, and before and after functions
execute.
The ConfigureFunctionsWorkerDefaults extension method has an overload that lets you register your own
middleware, as you can see in the following example.
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults(workerApplication =>
{
// Register our custom middlewares with the worker
workerApplication.UseMiddleware<ExceptionHandlingMiddleware>();
workerApplication.UseMiddleware<MyCustomMiddleware>();
workerApplication.UseWhen<StampHttpHeaderMiddleware>((context) =>
{
// We want to use this middleware only for http trigger invocations.
return context.FunctionDefinition.InputBindings.Values
.First(a => a.Type.EndsWith("Trigger")).Type == "httpTrigger";
});
})
.Build();
The UseWhen extension method can be used to register a middleware that executes conditionally. You pass this method a predicate that returns a boolean value; the middleware participates in the invocation processing pipeline only when the predicate returns true.
The following extension methods on FunctionContext make it easier to work with middleware in the isolated
model.
METHOD | DESCRIPTION
GetOutputBindings | Gets the output binding entries for the current function execution. Each entry in the result of this method is of type OutputBindingData . You can use the Value property to get or set the value as needed.
The following is an example of a middleware implementation that reads the HttpRequestData instance and updates the HttpResponseData instance during function execution. This middleware checks for the presence of a specific request header (x-correlationId) and, when the header is present, uses its value to stamp a response header. Otherwise, it generates a new GUID value and uses that to stamp the response header.
internal sealed class StampHttpHeaderMiddleware : IFunctionsWorkerMiddleware
{
public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
{
var requestData = await context.GetHttpRequestDataAsync();
string correlationId;
if (requestData!.Headers.TryGetValues("x-correlationId", out var values))
{
correlationId = values.First();
}
else
{
correlationId = Guid.NewGuid().ToString();
}
await next(context);
context.GetHttpResponseData()?.Headers.Add("x-correlationId", correlationId);
}
}
For a more complete example of using custom middleware in your function app, see the custom middleware
reference sample.
Execution context
.NET isolated passes a FunctionContext object to your function methods. This object lets you get an ILogger
instance to write to the logs by calling the GetLogger method and supplying a categoryName string. To learn
more, see Logging.
Bindings
Bindings are defined by using attributes on methods, parameters, and return types. A function method is a
method with a Function attribute and a trigger attribute applied to an input parameter, as shown in the
following example:
[Function("QueueFunction")]
[QueueOutput("output-queue")]
public static string[] Run([QueueTrigger("input-queue")] Book myQueueItem,
FunctionContext context)
The trigger attribute specifies the trigger type and binds input data to a method parameter. The previous
example function is triggered by a queue message, and the queue message is passed to the method in the
myQueueItem parameter.
The Function attribute marks the method as a function entry point. The name must be unique within a project, must start with a letter, and can contain only letters, numbers, _ , and - , up to 127 characters in length. Project templates often create a method named Run , but the method name can be any valid C# method name.
Because .NET isolated projects run in a separate worker process, bindings can't take advantage of rich binding
classes, such as ICollector<T> , IAsyncCollector<T> , and CloudBlockBlob . There's also no direct support for
types inherited from underlying service SDKs, such as DocumentClient and BrokeredMessage. Instead, bindings
rely on strings, arrays, and serializable types, such as plain old class objects (POCOs).
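For instance, the Book type used by the queue examples in this article could be a plain class like the following sketch; the property names are assumptions based on how the examples use them:
public class Book
{
    public string Name { get; set; }

    public string Id { get; set; }
}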
For HTTP triggers, you must use HttpRequestData and HttpResponseData to access the request and response
data. This is because you don't have access to the original HTTP request and response objects when running out-
of-process.
For a complete set of reference samples for using triggers and bindings when running out-of-process, see the
binding extensions reference sample.
Input bindings
A function can have zero or more input bindings that can pass data to a function. Like triggers, input bindings
are defined by applying a binding attribute to an input parameter. When the function executes, the runtime tries
to get data specified in the binding. The data being requested is often dependent on information provided by the
trigger using binding parameters.
Output bindings
To write to an output binding, you must apply an output binding attribute to the function method, which defines how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string array to a message queue named output-queue by using an
output binding:
[Function("QueueFunction")]
[QueueOutput("output-queue")]
public static string[] Run([QueueTrigger("input-queue")] Book myQueueItem,
FunctionContext context)
{
// Use a string array to return more than one message.
string[] messages = {
$"Book name = {myQueueItem.Name}",
$"Book ID = {myQueueItem.Id}"};
var logger = context.GetLogger("QueueFunction");
logger.LogInformation($"{messages[0]},{messages[1]}");

    return messages;
}
The response from an HTTP trigger is always considered an output, so a return value attribute isn't required.
HTTP trigger
The HTTP trigger translates the incoming HTTP request message into an HttpRequestData object that is passed to the function. This object provides data from the request, including Headers , Cookies , Identities , URL , and optionally a message Body . This object is a representation of the HTTP request object, not the request itself.
Likewise, the function returns an HttpResponseData object, which provides data used to create the HTTP response, including message StatusCode , Headers , and optionally a message Body .
The following code is an HTTP trigger:
[Function("HttpFunction")]
public static HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
HttpRequestData req,
FunctionContext executionContext)
{
var logger = executionContext.GetLogger("HttpFunction");
logger.LogInformation("message logged");
return response;
}
Logging
In .NET isolated, you can write to logs by using an ILogger instance obtained from a FunctionContext object
passed to your function. Call the GetLogger method, passing a string value that is the name for the category in
which the logs are written. The category is usually the name of the specific function from which the logs are
written. To learn more about categories, see the monitoring article.
The following example shows how to get an ILogger and write logs inside a function:
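A minimal sketch, assuming a function method that receives a FunctionContext parameter named executionContext (the category name is illustrative):
var logger = executionContext.GetLogger("MyFunction");
logger.LogInformation("Function invoked.");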
Use various methods of ILogger to write various log levels, such as LogWarning or LogError . To learn more
about log levels, see the monitoring article.
An ILogger is also provided when using dependency injection.
Debugging when targeting .NET Framework
If your isolated project targets .NET Framework 4.8, you must enable the functions debugger in your start-up code and then attach to the process manually, as shown in the following example:
using System;
using System.Diagnostics;
using Microsoft.Extensions.Hosting;
using Microsoft.Azure.Functions.Worker;
using NetFxWorker;
namespace MyDotnetFrameworkProject
{
internal class Program
{
static void Main(string[] args)
{
FunctionsDebugger.Enable();

// Create and run the host after enabling the debugger.
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .Build();

host.Run();
}
}
}
Next, you need to manually attach to the process using a .NET Framework debugger. Visual Studio doesn't do
this automatically for isolated process .NET Framework apps yet, and the "Start Debugging" operation should be
avoided.
In your project directory (or its build output directory), run:
This will start your worker, and the process will stop with the following message:
Azure Functions .NET Worker (PID: <process id>) initialized in debug mode. Waiting for debugger to attach...
Where <process id> is the ID for your worker process. You can now use Visual Studio to manually attach to the
process. For instructions on this operation, see How to attach to a running process.
Once the debugger is attached, the process execution will resume and you will be able to debug.
Output binding types — .NET class library (in-process): IAsyncCollector , DocumentClient, BrokeredMessage, and other client-specific types; .NET isolated: simple types, JSON serializable types, and arrays.
Next steps
Learn more about triggers and bindings
Learn more about best practices for Azure Functions
Azure Functions C# script (.csx) developer reference
Folder structure
The folder structure for a C# script project looks like the following:
FunctionsProject
| - MyFirstFunction
| | - run.csx
| | - function.json
| | - function.proj
| - MySecondFunction
| | - run.csx
| | - function.json
| | - function.proj
| - host.json
| - extensions.csproj
| - bin
There's a shared host.json file that can be used to configure the function app. Each function has its own code file
(.csx) and binding configuration file (function.json).
The binding extensions required in version 2.x and later versions of the Functions runtime are defined in the
extensions.csproj file, with the actual library files in the bin folder. When developing locally, you must register
binding extensions. When developing functions in the Azure portal, this registration is done for you.
Binding to arguments
Input or output data is bound to a C# script function parameter via the name property in the function.json
configuration file. The following example shows a function.json file and run.csx file for a queue-triggered
function. The parameter that receives data from the queue message is named myQueueItem because that's the
value of the name property.
{
"disabled": false,
"bindings": [
{
"type": "queueTrigger",
"direction": "in",
"name": "myQueueItem",
"queueName": "myqueue-items",
"connection":"MyStorageConnectionAppSetting"
}
]
}
#r "Microsoft.WindowsAzure.Storage"
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage.Queue;
using System;
TIP
If you plan to use the HTTP or WebHook bindings, plan to avoid port exhaustion that can be caused by improper
instantiation of HttpClient . For more information, see How to manage connections in Azure Functions.
A POCO class must have a getter and setter defined for each property.
#load "mylogger.csx"
using Microsoft.Extensions.Logging;
Example mylogger.csx:
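A minimal sketch of such a shared helper (the method name is illustrative):
public static void MyLogger(ILogger log, string logtext)
{
    log.LogInformation(logtext);
}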
Using a shared .csx file is a common pattern when you want to strongly type the data passed between functions
by using a POCO object. In the following simplified example, an HTTP trigger and queue trigger share a POCO
object named Order to strongly type the order data:
Example run.csx for HTTP trigger:
#load "..\shared\order.csx"
using System.Net;
using Microsoft.Extensions.Logging;
public static async Task<HttpResponseMessage> Run(Order req, IAsyncCollector<Order> outputQueueItem, ILogger log)
{
    if (req.orderId == null)
{
return new HttpResponseMessage(HttpStatusCode.BadRequest);
}
else
{
await outputQueueItem.AddAsync(req);
return new HttpResponseMessage(HttpStatusCode.OK);
}
}
#load "..\shared\order.csx"
using System;
using Microsoft.Extensions.Logging;
public static void Run(Order myQueueItem, out Order outputQueueItem, ILogger log)
{
log.LogInformation($"C# Queue trigger function processed order...");
log.LogInformation(myQueueItem.ToString());
outputQueueItem = myQueueItem;
}
Example order.csx:
public class Order
{
public string orderId {get; set; }
public string custName {get; set;}
public string custAddress {get; set;}
public string custEmail {get; set;}
public string cartId {get; set; }
}
Logging
To log output to your streaming logs in C#, include an argument of type ILogger. We recommend that you name
it log . Avoid using Console.Write in Azure Functions.
To log custom metrics, use the LogMetric extension method on ILogger:
logger.LogMetric("TestMetric", 1234);
This code is an alternative to calling TrackMetric by using the Application Insights API for .NET.
Async
To make a function asynchronous, use the async keyword and return a Task object.
You can't use out parameters in async functions. For output bindings, use the function return value or a
collector object instead.
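For example, a queue-triggered script that writes through a collector might look like the following sketch (binding names are illustrative):
public static async Task Run(string myQueueItem, IAsyncCollector<string> outputItems, ILogger log)
{
    log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");

    // Write to the output binding through the collector rather than an out parameter.
    await outputItems.AddAsync(myQueueItem);
}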
Cancellation tokens
A function can accept a CancellationToken parameter, which enables the operating system to notify your code
when the function is about to be terminated. You can use this notification to make sure the function doesn't
terminate unexpectedly in a way that leaves data in an inconsistent state.
The following example shows how to check for impending function termination.
using System;
using System.IO;
using System.Threading;
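A minimal sketch of the body of such a function, assuming a queue-triggered script:
public static void Run(string myQueueItem, ILogger log, CancellationToken token)
{
    for (int i = 0; i < 100; i++)
    {
        if (token.IsCancellationRequested)
        {
            // Stop cleanly before the host terminates the process.
            log.LogInformation("Function was cancelled at iteration {Count}", i);
            break;
        }

        Thread.Sleep(5000);
        log.LogInformation("Normal processing for message: {Item}", myQueueItem);
    }
}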
Importing namespaces
If you need to import namespaces, you can do so as usual, with the using clause.
using System.Net;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
The following namespaces are automatically imported and are therefore optional:
System
System.Collections.Generic
System.IO
System.Linq
System.Net.Http
System.Threading.Tasks
Microsoft.Azure.WebJobs
Microsoft.Azure.WebJobs.Host
#r "System.Web.Http"
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
The following assemblies are automatically added by the Azure Functions hosting environment:
mscorlib
System
System.Core
System.Xml
System.Net.Http
Microsoft.Azure.WebJobs
Microsoft.Azure.WebJobs.Host
Microsoft.Azure.WebJobs.Extensions
System.Web.Http
System.Net.Http.Formatting
For information on how to upload files to your function folder, see the section on package management.
Watched directories
The directory that contains the function script file is automatically watched for changes to assemblies. To watch
for assembly changes in other directories, add them to the watchDirectories list in host.json.
By default, the supported set of Functions extension NuGet packages is made available to your C# script
function app by using extension bundles. To learn more, see Extension bundles.
If for some reason you can't use extension bundles in your project, you can also use the Azure Functions Core
Tools to install extensions based on bindings defined in the function.json files in your app. When using Core
Tools to register extensions, make sure to use the --csx option. To learn more, see Install extensions.
By default, Core Tools reads the function.json files and adds the required packages to an extensions.csproj C#
class library project file in the root of the function app's file system (wwwroot). Because Core Tools uses
dotnet.exe, you can use it to add any NuGet package reference to this extensions file. During installation, Core
Tools builds the extensions.csproj to install the required libraries. Here is an example extensions.csproj file that
adds a reference to Microsoft.ProjectOxford.Face version 1.1.0:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard2.0</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.ProjectOxford.Face" Version="1.1.0" />
</ItemGroup>
</Project>
To use a custom NuGet feed, specify the feed in a Nuget.Config file in the function app root folder. For more
information, see Configuring NuGet behavior.
If you are working on your project only in the portal, you'll need to manually create the extensions.csproj file or
a Nuget.Config file directly in the site. To learn more, see Manually install extensions.
Environment variables
To get an environment variable or an app setting value, use System.Environment.GetEnvironmentVariable , as
shown in the following code example:
public static void Run(TimerInfo myTimer, ILogger log)
{
    log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
    log.LogInformation(GetEnvironmentVariable("AzureWebJobsStorage"));
    log.LogInformation(GetEnvironmentVariable("WEBSITE_SITE_NAME"));
}

public static string GetEnvironmentVariable(string name)
{
    // Helper that prefixes the setting name to its value.
    return name + ": " + System.Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
}
Binding at runtime
In C# and other .NET languages, you can use an imperative binding pattern, as opposed to the declarative
bindings in function.json. Imperative binding is useful when binding parameters need to be computed at
runtime rather than design time. With this pattern, you can bind to supported input and output bindings on-the-
fly in your function code.
Define an imperative binding as follows:
Do not include an entry in function.json for your desired imperative bindings.
Pass in an input parameter Binder binder or IBinder binder .
Use the BindAsync<T> pattern, shown in the examples that follow, to perform the data binding.
BindingTypeAttribute is the .NET attribute that defines your binding and T is an input or output type that's
supported by that binding type. T cannot be an out parameter type (such as out JObject ). For example, the
Mobile Apps table output binding supports six output types, but you can only use ICollector<T> or
IAsyncCollector<T> for T .
Single attribute example
The following example code creates a Storage blob output binding with a blob path that's defined at run time, then writes a string to the blob:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host.Bindings.Runtime;
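A sketch of the example body (the blob path is illustrative, and System.IO is assumed for TextWriter and FileAccess):
public static async Task Run(string input, Binder binder)
{
    // Bind to the blob output at run time, then write to it.
    using (var writer = await binder.BindAsync<TextWriter>(
        new BlobAttribute($"samples-output/{input}", FileAccess.Write)))
    {
        await writer.WriteAsync("Hello World!");
    }
}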
BlobAttribute defines the Storage blob input or output binding, and TextWriter is a supported output binding
type.
Multiple attribute example
The preceding example gets the app setting for the function app's main Storage account connection string
(which is AzureWebJobsStorage ). You can specify a custom app setting to use for the Storage account by adding
the StorageAccountAttribute and passing the attribute array into BindAsync<T>() . Use a Binder parameter, not
IBinder . For example:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host.Bindings.Runtime;
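A sketch of the example body, passing both attributes to BindAsync<T>() (names are illustrative):
public static async Task Run(string input, Binder binder)
{
    var attributes = new Attribute[]
    {
        new BlobAttribute($"samples-output/{input}", FileAccess.Write),
        new StorageAccountAttribute("MyStorageAccount")
    };

    // The attribute array directs the binding to the custom Storage account.
    using (var writer = await binder.BindAsync<TextWriter>(attributes))
    {
        await writer.WriteAsync("Hello World!");
    }
}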
The following table lists the .NET attributes for each binding type and the packages in which they are defined.
BINDING | ATTRIBUTE | ADD REFERENCE
Cosmos DB | Microsoft.Azure.WebJobs.DocumentDBAttribute | #r "Microsoft.Azure.WebJobs.Extensions.CosmosDB"
Twilio | Microsoft.Azure.WebJobs.TwilioSmsAttribute | #r "Microsoft.Azure.WebJobs.Extensions.Twilio"
Next steps
Learn more about triggers and bindings
Learn more about best practices for Azure Functions
Azure Functions F# Developer Reference
F# for Azure Functions is a solution for easily running small pieces of code, or "functions," in the cloud. Data
flows into your F# function via function arguments. Argument names are specified in function.json , and there
are predefined names for accessing things like the function logger and cancellation tokens.
IMPORTANT
F# script (.fsx) is only supported by version 1.x of the Azure Functions runtime. If you want to use F# with version 2.x and
later versions of the runtime, you must use a precompiled F# class library project (.fs). You create, manage, and publish an
F# class library project using Visual Studio as you would a C# class library project. For more information about Functions
versions, see Azure Functions runtime versions overview.
This article assumes that you've already read the Azure Functions developer reference.
Folder structure
The folder structure for an F# script project looks like the following:
FunctionsProject
| - MyFirstFunction
| | - run.fsx
| | - function.json
| | - function.proj
| - MySecondFunction
| | - run.fsx
| | - function.json
| | - function.proj
| - host.json
| - extensions.csproj
| - bin
There's a shared host.json file that can be used to configure the function app. Each function has its own code file
(.fsx) and binding configuration file (function.json).
The binding extensions required in version 2.x and later versions of the Functions runtime are defined in the
extensions.csproj file, with the actual library files in the bin folder. When developing locally, you must register
binding extensions. When developing functions in the Azure portal, this registration is done for you.
Binding to arguments
Each binding supports some set of arguments, as detailed in the Azure Functions triggers and bindings
developer reference. For example, one of the argument bindings a blob trigger supports is a POCO, which can be expressed using an F# record. For example:
type Item = { Id: string }

let Run(blob: string, output: byref<Item>) =
    let item = { Id = "Some ID" }
    output <- item
Your F# Azure Function will take one or more arguments. When we talk about Azure Functions arguments, we
refer to input arguments and output arguments. An input argument is exactly what it sounds like: input to your
F# Azure Function. An output argument is mutable data or a byref<> argument that serves as a way to pass
data back out of your function.
In the example above, blob is an input argument, and output is an output argument. Notice that we used
byref<> for output (there's no need to add the [<Out>] annotation). Using a byref<> type allows your
function to change which record or object the argument refers to.
When an F# record is used as an input type, the record definition must be marked with [<CLIMutable>] in order
to allow the Azure Functions framework to set the fields appropriately before passing the record to your
function. Under the hood, [<CLIMutable>] generates setters for the record properties. For example:
[<CLIMutable>]
type TestObject =
{ SenderName : string
Greeting : string }
An F# class can also be used for both in and out arguments. For a class, properties will usually need getters and
setters. For example:
type Item() =
member val Id = "" with get,set
member val Text = "" with get,set
Logging
To log output to your streaming logs in F#, your function should take an argument of type ILogger. For
consistency, we recommend this argument is named log . For example:
Async
The async workflow can be used, but the result needs to return a Task . This can be done with
Async.StartAsTask , for example:
let Run(req: HttpRequestMessage) =
async {
return new HttpResponseMessage(HttpStatusCode.OK)
} |> Async.StartAsTask
Cancellation Token
If your function needs to handle shutdown gracefully, you can give it a CancellationToken argument. This can be
combined with async , for example:
Importing namespaces
Namespaces can be opened in the usual way:
open System.Net
open System.Threading.Tasks
open Microsoft.Extensions.Logging
#r "System.Web.Http"
open System.Net
open System.Net.Http
open System.Threading.Tasks
open Microsoft.Extensions.Logging
The following assemblies are automatically added by the Azure Functions hosting environment:
mscorlib
System
System.Core
System.Xml
System.Net.Http
Microsoft.Azure.WebJobs
Microsoft.Azure.WebJobs.Host
Microsoft.Azure.WebJobs.Extensions
System.Web.Http
System.Net.Http.Formatting
In addition, the following assemblies are special-cased and may be referenced by simple name (for example,
#r "AssemblyName" ):
Newtonsoft.Json
Microsoft.WindowsAzure.Storage
Microsoft.ServiceBus
Microsoft.AspNet.WebHooks.Receivers
Microsoft.AspNet.WebHooks.Common
If you need to reference a private assembly, you can upload the assembly file into a bin folder relative to your
function and reference it by using the file name (e.g. #r "MyAssembly.dll" ). For information on how to upload
files to your function folder, see the following section on package management.
Editor Prelude
An editor that supports F# Compiler Services will not be aware of the namespaces and assemblies that Azure
Functions automatically includes. As such, it can be useful to include a prelude that helps the editor find the
assemblies you are using, and to explicitly open namespaces. For example:
#if !COMPILED
#I "../../bin/Binaries/WebJobs.Script.Host"
#r "Microsoft.Azure.WebJobs.Host.dll"
#endif
open System
open Microsoft.Azure.WebJobs.Host
open Microsoft.Extensions.Logging
When Azure Functions executes your code, it processes the source with COMPILED defined, so the editor prelude
will be ignored.
Package management
To use NuGet packages in an F# function, add a project.json file to the function's folder in the function app's
file system. Here is an example project.json file that adds a NuGet package reference to
Microsoft.ProjectOxford.Face version 1.1.0:
{
"frameworks": {
"net46":{
"dependencies": {
"Microsoft.ProjectOxford.Face": "1.1.0"
}
}
}
}
Only the .NET Framework 4.6 is supported, so make sure that your project.json file specifies net46 as shown
here.
When you upload a project.json file, the runtime gets the packages and automatically adds references to the
package assemblies. You don't need to add #r "AssemblyName" directives. Just add the required open
statements to your .fsx file.
You may wish to put automatically referenced assemblies in your editor prelude, to improve your editor's
interaction with F# Compiler Services.
How to add a project.json file to your Azure Function
1. Begin by making sure your function app is running, which you can do by opening your function in the Azure
portal. This also gives access to the streaming logs where package installation output will be displayed.
2. To upload a project.json file, use one of the methods described in how to update function app files. If you
are using Continuous Deployment for Azure Functions, you can add a project.json file to your staging
branch in order to experiment with it before adding it to your deployment branch.
3. After the project.json file is added, you will see output similar to the following example in your function's
streaming log:
Environment variables
To get an environment variable or an app setting value, use System.Environment.GetEnvironmentVariable , for
example:
open System.Environment
open Microsoft.Extensions.Logging
#load "logger.fsx"
logger.fsx
Paths provides to the #load directive are relative to the location of your .fsx file.
#load "logger.fsx" loads a file located in the function folder.
#load "package\logger.fsx" loads a file located in the package folder in the function folder.
#load "..\shared\mylogger.fsx" loads a file located in the shared folder at the same level as the function
folder, that is, directly under wwwroot .
The #load directive only works with .fsx (F# script) files, and not with .fs files.
Next steps
For more information, see the following resources:
F# Guide
Best Practices for Azure Functions
Azure Functions developer reference
Azure Functions triggers and bindings
Azure Functions testing
Azure Functions scaling
Azure Functions JavaScript developer guide
This guide contains detailed information to help you succeed developing Azure Functions using JavaScript.
As an Express.js, Node.js, or JavaScript developer, if you're new to Azure Functions, please consider first reading
one of the JavaScript quickstart articles.
Folder structure
The required folder structure for a JavaScript project looks like the following. This default can be changed. For
more information, see the scriptFile section below.
FunctionsProject
| - MyFirstFunction
| | - index.js
| | - function.json
| - MySecondFunction
| | - index.js
| | - function.json
| - SharedCode
| | - myFirstHelperFunction.js
| | - mySecondHelperFunction.js
| - node_modules
| - host.json
| - package.json
| - extensions.csproj
At the root of the project, there's a shared host.json file that can be used to configure the function app. Each
function has a folder with its own code file (.js) and binding configuration file (function.json). The name of
function.json 's parent directory is always the name of your function.
The binding extensions required in version 2.x of the Functions runtime are defined in the extensions.csproj
file, with the actual library files in the bin folder. When developing locally, you must register binding
extensions. When developing functions in the Azure portal, this registration is done for you.
Exporting a function
JavaScript functions must be exported via module.exports (or exports ). Your exported function should be a
JavaScript function that executes when triggered.
By default, the Functions runtime looks for your function in index.js , where index.js shares the same parent
directory as its corresponding function.json . In the default case, your exported function should be the only
export from its file or the export named run or index . To configure the file location and export name of your
function, read about configuring your function's entry point below.
Your exported function is passed a number of arguments on execution. The first argument it takes is always a
context object.
When using the async function declaration or plain JavaScript Promises in version 2.x, 3.x, or 4.x of the
Functions runtime, you don't need to explicitly call the context.done callback to signal that your function has
completed. Your function completes when the exported async function/Promise completes.
The following example is a simple function that logs that it was triggered and immediately completes execution:
module.exports = async function (context) {
    context.log('JavaScript trigger function processed a request.');
};
When exporting an async function, you can also configure an output binding to take the return value. This is
recommended if you only have one output binding.
Returning from the function
To assign an output using return , change the name property to $return in function.json .
{
"type": "http",
"direction": "out",
"name": "$return"
}
In this case, your function should look like the following example:
module.exports = async function (context, req) {
    return { status: 201, body: 'Insert succeeded.' };
};
Bindings
In JavaScript, bindings are configured and defined in a function's function.json. Functions interact with bindings
a number of ways.
Inputs
Inputs are divided into two categories in Azure Functions: one is the trigger input and the other is the additional
input. Trigger and other input bindings (bindings of direction === "in" ) can be read by a function in three
ways:
[Recommended] As parameters passed to your function. They're passed to the function in the
same order that they're defined in function.json. The name property defined in function.json doesn't need
to match the name of your parameter, although it should.
As members of the context.bindings object. Each member is named by the name property defined
in function.json.
As inputs using the JavaScript arguments object. This is essentially the same as passing parameters, but it allows you to handle inputs dynamically.
Outputs
Outputs (bindings of direction === "out" ) can be written to by a function in a number of ways. In all cases, the
name property of the binding as defined in function.json corresponds to the name of the object member written
to in your function.
You can assign data to output bindings in one of the following ways (don't combine these methods):
[Recommended for multiple outputs] Returning an object. If you are using an async/Promise
returning function, you can return an object with assigned output data. In the example below, the output
bindings are named "httpResponse" and "queueOutput" in function.json.
[Recommended for single output] Returning a value directly and using the $return binding
name. This only works for async/Promise returning functions. See example in exporting an async
function.
Assigning values to context.bindings: You can assign values directly to context.bindings.
context object
The runtime uses a context object to pass data to and from your function and the runtime. Used to read and
set data from bindings and for writing to logs, the context object is always the first parameter passed to a
function.
The context passed into your function exposes an executionContext property, which is an object with the
following properties:
context.bindings property
context.bindings
Returns a named object that is used to read or assign binding data. Input and trigger binding data can be
accessed by reading properties on context.bindings . Output binding data can be assigned by adding data to
context.bindings
For example, the following binding definitions in your function.json let you access the contents of a queue from
context.bindings.myInput and assign outputs to a queue using context.bindings.myOutput .
{
"type":"queue",
"direction":"in",
"name":"myInput"
...
},
{
"type":"queue",
"direction":"out",
"name":"myOutput"
...
}
// myInput contains the input data, which may have properties such as "name"
var author = context.bindings.myInput.name;
// Similarly, you can set your output data
context.bindings.myOutput = {
some_text: 'hello world',
a_number: 1 };
In a synchronous function, you can choose to define output binding data using the context.done method
instead of the context.bindings object (see below).
context.bindingData property
context.bindingData
Returns a named object that contains trigger metadata and function invocation data ( invocationId ,
sys.methodName , sys.utcNow , sys.randGuid ). For an example of trigger metadata, see this event hubs example.
context.done method
In 2.x, 3.x, and 4.x, the function should be marked as async even if there's no awaited function call inside the
function, and the function doesn't need to call context.done to indicate the end of the function.
context.log method
context.log(message)
Allows you to write to the streaming function logs at the default trace level, with other logging levels available.
Trace logging is described in detail in the next section.
All context.log methods support the same parameter format that's supported by the Node.js util.format
method. Consider the following code, which writes function logs by using the default trace level:
context.log('Node.js HTTP trigger function processed a request. RequestUri=' + req.originalUrl);
You can also write the same code in the following format:
context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);
NOTE
Don't use console.log to write trace outputs. Because output from console.log is captured at the function app level,
it's not tied to a specific function invocation and isn't displayed in a specific function's logs. Also, version 1.x of the
Functions runtime doesn't support using console.log to write to the console.
Trace levels
In addition to the default level, the following logging methods are available that let you write function logs at
specific trace levels.
METHOD | DESCRIPTION
context.log.error(message) | Writes an error-level event to the logs.
context.log.warn(message) | Writes a warning-level event to the logs.
context.log.info(message) | Writes to info-level logging, or lower.
context.log.verbose(message) | Writes to verbose-level logging.
The following example writes the same log at the warning trace level, instead of the info level:
context.log.warn('Something has happened.');
Because error is the highest trace level, this trace is written to the output at all trace levels as long as logging is
enabled.
Configure the trace level for logging
Functions lets you define the threshold trace level for writing to the logs or the console. The specific threshold
settings depend on your version of the Functions runtime.
To set the threshold for traces written to the logs, use the logging.logLevel property in the host.json file. This
JSON object lets you define a default threshold for all functions in your function app, plus you can define
specific thresholds for individual functions. To learn more, see How to configure monitoring for Azure Functions.
// Use this with 'tagOverrides' to correlate custom telemetry to the parent function invocation.
var operationIdOverride = {"ai.operation.id":context.traceContext.traceparent};
The tagOverrides parameter sets the operation_Id to the function's invocation ID. This setting enables you to
correlate all of the automatically generated and custom telemetry for a given function invocation.
Response object
The context.res (response) object has the following properties:
cookies — An array of HTTP cookie objects that are set in the response. An HTTP cookie object has a name , value , and other cookie properties, such as maxAge or sameSite .
// You can access your HTTP request off the context ...
if(context.req.body.emoji === ':pizza:') context.log('Yay!');
// and also set your HTTP response
context.res = { status: 202, body: 'You successfully ordered more coffee!' };
From the named input and output bindings. In this way, the HTTP trigger and bindings work the
same as any other binding. The following example sets the response object by using a named response
binding:
{
"type": "http",
"direction": "out",
"name": "response"
}
[Response only] By returning the response. A special binding name of $return allows you to
assign the function's return value to the output binding. The following HTTP output binding defines a
$return output parameter:
{
"type": "http",
"direction": "out",
"name": "$return"
}
Node version
The following table shows current supported Node.js versions for each major version of the Functions runtime,
by operating system:
You can see the current version that the runtime is using by logging process.version from any function.
Setting the Node version
For Windows function apps, target the version in Azure by setting the WEBSITE_NODE_DEFAULT_VERSION app setting
to a supported LTS version, such as ~16 .
To learn more about Azure Functions runtime support policy, please refer to this article.
Dependency management
In order to use community libraries in your JavaScript code, as is shown in the below example, you need to
ensure that all dependencies are installed on your Function App in Azure.
NOTE
You should define a package.json file at the root of your Function App. Defining the file lets all functions in the app
share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by
adding a package.json file in the folder of a specific function.
When deploying Function Apps from source control, any package.json file present in your repo will trigger an
npm install in its folder during deployment. But when deploying via the Portal or CLI, you'll have to manually
install the packages.
There are two ways to install packages on your Function App:
Deploying with Dependencies
1. Install all requisite packages locally by running npm install .
2. Deploy your code, and ensure that the node_modules folder is included in the deployment.
Using Kudu (Windows only)
1. Go to https://<function_app_name>.scm.azurewebsites.net .
2. Select Debug Console > CMD .
3. Go to D:\home\site\wwwroot , and then drag your package.json file to the wwwroot folder at the top half
of the page.
You can upload files to your function app in other ways also. For more information, see How to update
function app files.
4. After the package.json file is uploaded, run the npm install command in the Kudu remote execution
console .
This action downloads the packages indicated in the package.json file and restarts the function app.
Environment variables
Add your own environment variables to a function app, in both your local and cloud environments, such as
operational secrets (connection strings, keys, and endpoints) or environmental settings (such as profiling
variables). Access these settings using process.env in your function code.
In local development environment
When running locally, your functions project includes a local.settings.json file, where you store your
environment variables in the Values object.
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "",
"FUNCTIONS_WORKER_RUNTIME": "node",
"translatorTextEndPoint": "https://api.cognitive.microsofttranslator.com/",
"translatorTextKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"languageWorkers__node__arguments": "--prof"
}
}
ECMAScript modules (ES modules) are the new official standard module system for Node.js. So far, the code
samples in this article use the CommonJS syntax. When running Azure Functions in Node.js 14 or higher, you
can choose to write your functions using ES modules syntax.
To use ES modules in a function, change its filename to use a .mjs extension. The following index.mjs file
example is an HTTP triggered function that uses ES modules syntax to import the uuid library and return a
value.
Using scriptFile
By default, a JavaScript function is executed from index.js , a file that shares the same parent directory as its
corresponding function.json .
scriptFile can be used to get a folder structure that looks like the following example:
FunctionApp
| - host.json
| - myNodeFunction
| | - function.json
| - lib
| | - sayHello.js
| - node_modules
| | - ... packages ...
| - package.json
The function.json for myNodeFunction should include a scriptFile property pointing to the file with the
exported function to run.
{
"scriptFile": "../lib/sayHello.js",
"bindings": [
...
]
}
Using entryPoint
In scriptFile (or index.js ), a function must be exported using module.exports in order to be found and run.
By default, the function that executes when triggered is the only export from that file, the export named run , or
the export named index .
This can be configured using entryPoint in function.json , as in the following example:
{
"entryPoint": "logFoo",
"bindings": [
...
]
}
In Functions v2.x or higher, which supports the this parameter in user functions, the function code could then
be as in the following example:
class MyObj {
constructor() {
this.foo = 1;
};
async logFoo(context) {
context.log("Foo is " + this.foo);
}
}

const myObj = new MyObj();
module.exports = myObj;
In this example, it's important to note that although an object is being exported, there are no guarantees for
preserving state between executions.
Local debugging
When started with the --inspect parameter, a Node.js process listens for a debugging client on the specified
port. In Azure Functions 2.x or higher, you can specify arguments to pass into the Node.js process that runs your
code by adding the environment variable or App Setting languageWorkers:node:arguments = <args> .
To debug locally, add "languageWorkers:node:arguments": "--inspect=5858" under Values in your
local.settings.json file and attach a debugger to port 5858.
When debugging using VS Code, the --inspect parameter is automatically added using the port value in the
project's launch.json file.
In version 1.x, setting languageWorkers:node:arguments won't work. The debug port can be selected with the
--nodeDebugPort parameter on Azure Functions Core Tools.
NOTE
You can only configure languageWorkers:node:arguments when running the function app locally.
Testing
Testing your functions includes:
HTTP end-to-end : To test a function from its HTTP endpoint, you can use any tool that can make an
HTTP request such as cURL, Postman, or JavaScript's fetch method.
Integration testing : Integration test includes the function app layer. This testing means you need to
control the parameters into the function including the request and the context. The context is unique to
each kind of trigger and means you need to know the incoming and outgoing bindings for that trigger
type.
Learn more about integration testing and mocking the context layer with an experimental GitHub repo,
https://github.com/anthonychu/azure-functions-test-utils.
Unit testing : Unit testing is performed within the function app. You can use any tool that can test
JavaScript, such as Jest or Mocha.
TypeScript
When you target version 2.x or higher of the Functions runtime, both Azure Functions for Visual Studio Code
and the Azure Functions Core Tools let you create function apps using a template that supports TypeScript
function app projects. The template generates package.json and tsconfig.json project files that make it easier
to transpile, run, and publish JavaScript functions from TypeScript code with these tools.
A generated .funcignore file is used to indicate which files are excluded when a project is published to Azure.
TypeScript files (.ts) are transpiled into JavaScript files (.js) in the dist output directory. TypeScript templates
use the scriptFile parameter in function.json to indicate the location of the corresponding .js file in the
dist folder. The output location is set by the template by using outDir parameter in the tsconfig.json file. If
you change this setting or the name of the folder, the runtime isn't able to find the code to run.
The way that you locally develop and deploy from a TypeScript project depends on your development tool.
Visual Studio Code
The Azure Functions for Visual Studio Code extension lets you develop your functions using TypeScript. The
Core Tools is a requirement of the Azure Functions extension.
To create a TypeScript function app in Visual Studio Code, choose TypeScript as your language when you create
a function app.
When you press F5 to run the app locally, transpilation is done before the host (func.exe) is initialized.
When you deploy your function app to Azure using the Deploy to function app... button, the Azure Functions
extension first generates a production-ready build of JavaScript files from the TypeScript source files.
Azure Functions Core Tools
There are several ways in which a TypeScript project differs from a JavaScript project when using the Core Tools.
Create project
To create a TypeScript function app project using Core Tools, you must specify the TypeScript language option
when you create your function app. You can do this in one of the following ways:
Run the func init command, select node as your language stack, and then select typescript .
Run the func init --worker-runtime typescript command.
Run local
To run your function app code locally using Core Tools, use the following commands instead of func host start:
npm install
npm start
Publish to Azure
Before you use the func azure functionapp publish command to deploy to Azure, you create a production-
ready build of JavaScript files from the TypeScript source files.
The following commands prepare and publish your TypeScript project using Core Tools:
npm run build:production
func azure functionapp publish <APP_NAME>
In this command, replace <APP_NAME> with the name of your function app.
When writing Azure Functions in JavaScript, you should write code using the async and await keywords.
Writing code using async and await instead of callbacks or .then and .catch with Promises helps avoid two
common problems:
Throwing uncaught exceptions that crash the Node.js process, potentially affecting the execution of other
functions.
Unexpected behavior, such as missing logs from context.log, caused by asynchronous calls that aren't
properly awaited.
In the example below, the asynchronous method fs.readFile is invoked with an error-first callback function as
its second parameter. This code causes both of the issues mentioned above. An exception that isn't explicitly
caught in the correct scope crashes the entire process (issue #1). Calling the 1.x context.done() outside of the
scope of the callback function means that the function invocation may end before the file is read (issue #2). In
this example, calling 1.x context.done() too early results in missing log entries starting with Data from file: .
// NOT RECOMMENDED PATTERN
const fs = require('fs');
Using the async and await keywords helps avoid both of these errors. You should use the Node.js utility
function util.promisify to turn error-first callback-style functions into awaitable functions.
In the example below, any unhandled exceptions thrown during the function execution only fail the individual
invocation that raised an exception. The await keyword means that steps following readFileAsync only execute
after readFile is complete. With async and await , you also don't need to call the context.done() callback.
// Recommended pattern
const fs = require('fs');
const util = require('util');
const readFileAsync = util.promisify(fs.readFile);
Next steps
For more information, see the following resources:
Best practices for Azure Functions
Azure Functions developer reference
Azure Functions triggers and bindings
Azure Functions Java developer guide
This guide contains detailed information to help you succeed developing Azure Functions using Java.
As a Java developer, if you're new to Azure Functions, please consider first reading one of the following articles:
Java function using Visual Studio Code
Java/Maven function with terminal/command prompt
Java function using Gradle
Java function using Eclipse
Java function using IntelliJ IDEA
Developer guide
Hosting options
Performance considerations
Java samples with different triggers
Event Hub trigger and Cosmos DB output binding
Programming model
The concepts of triggers and bindings are fundamental to Azure Functions. Triggers start the execution of your
code. Bindings give you a way to pass data to and return data from a function, without having to write custom
data access code.
mvn archetype:generate \
-DarchetypeGroupId=com.microsoft.azure \
-DarchetypeArtifactId=azure-functions-archetype
Folder structure
Here is the folder structure of an Azure Functions Java project:
FunctionsProject
| - src
| | - main
| | | - java
| | | | - FunctionApp
| | | | | - MyFirstFunction.java
| | | | | - MySecondFunction.java
| - target
| | - azure-functions
| | | - FunctionApp
| | | | - FunctionApp.jar
| | | | - host.json
| | | | - MyFirstFunction
| | | | | - function.json
| | | | - MySecondFunction
| | | | | - function.json
| | | | - bin
| | | | - lib
| - pom.xml
You can use a shared host.json file to configure the function app. Each function has its own code file (.java) and
binding configuration file (function.json).
You can put more than one function in a project. Avoid putting your functions into separate jars. The
FunctionApp in the target directory is what gets deployed to your function app in Azure.
IMPORTANT
You must configure an Azure Storage account in your local.settings.json to run Azure Blob storage, Azure Queue storage,
or Azure Table storage triggers locally.
Example:
public class Function {
public String echo(@HttpTrigger(name = "req",
methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS)
String req, ExecutionContext context) {
return String.format(req);
}
}
{
"scriptFile": "azure-functions-example.jar",
"entryPoint": "com.example.Function.echo",
"bindings": [
{
"type": "httpTrigger",
"name": "req",
"direction": "in",
"authLevel": "anonymous",
"methods": [ "GET","POST" ]
},
{
"type": "http",
"name": "$return",
"direction": "out"
}
]
}
Java versions
The version of Java used when creating the function app on which functions runs in Azure is specified in the
pom.xml file. The Maven archetype currently generates a pom.xml for Java 8, which you can change before
publishing. The Java version in pom.xml should match the version on which you have locally developed and
tested your app.
Supported versions
The following table shows current supported Java versions for each major version of the Functions runtime, by
operating system:
FUNCTIONS VERSION | JAVA VERSIONS (WINDOWS) | JAVA VERSIONS (LINUX)
4.x | 11, 8 | 11, 8
3.x | 11, 8 | 11, 8
2.x | 8 | n/a
Unless you specify a Java version for your deployment, the Maven archetype defaults to Java 8 during
deployment to Azure.
Specify the deployment version
You can control the version of Java targeted by the Maven archetype by using the -DjavaVersion parameter. The
value of this parameter can be either 8 or 11 .
The Maven archetype generates a pom.xml that targets the specified Java version. The following elements in
pom.xml indicate the Java version to use:
The following examples show the settings for Java 8 in the relevant sections of the pom.xml file:
Java.version
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<java.version>1.8</java.version>
<azure.functions.maven.plugin.version>1.6.0</azure.functions.maven.plugin.version>
<azure.functions.java.library.version>1.3.1</azure.functions.java.library.version>
<functionAppName>fabrikam-functions-20200718015742191</functionAppName>
<stagingDirectory>${project.build.directory}/azure-functions/${functionAppName}</stagingDirectory>
</properties>
JavaVersion
<runtime>
<!-- runtime os, could be windows, linux or docker-->
<os>windows</os>
<javaVersion>8</javaVersion>
<!-- for docker function, please set the following parameters -->
<!-- <image>[hub-user/]repo-name[:tag]</image> -->
<!-- <serverId></serverId> -->
<!-- <registryUrl></registryUrl> -->
</runtime>
IMPORTANT
You must have the JAVA_HOME environment variable set correctly to the JDK directory that is used during code
compiling using Maven. Make sure that the version of the JDK is at least as high as the Java.version setting.
The os element in the runtime section accepts the values windows, linux, and docker.
The following example shows the operating system setting in the runtime section of the pom.xml file:
<runtime>
<!-- runtime os, could be windows, linux or docker-->
<os>windows</os>
<javaVersion>8</javaVersion>
<!-- for docker function, please set the following parameters -->
<!-- <image>[hub-user/]repo-name[:tag]</image> -->
<!-- <serverId></serverId> -->
<!-- <registryUrl></registryUrl> -->
</runtime>
For local development or testing, you can download the Microsoft build of OpenJDK or Adoptium Temurin
binaries for free. Azure support for issues with the JDKs and function apps is available with a qualified support
plan.
If you would like to continue using the Zulu for Azure binaries on your Function app, please configure your app
accordingly. You can continue to use the Azul binaries for your site, but any security patches or improvements
will only be available in new versions of the OpenJDK, so we recommend that you eventually remove this
configuration so that your Function apps use the latest available version of Java.
Customize JVM
Functions lets you customize the Java virtual machine (JVM) used to run your Java functions. The following JVM
options are used by default:
-XX:+TieredCompilation
-XX:TieredStopAtLevel=1
-noverify
-Djava.net.preferIPv4Stack=true
-jar
You can provide additional arguments in an app setting named JAVA_OPTS . You can add app settings to your
function app deployed to Azure in the Azure portal or the Azure CLI.
IMPORTANT
In the Consumption plan, you must also add the WEBSITE_USE_PLACEHOLDER setting with a value of 0 for the
customization to work. This setting does increase the cold start times for Java functions.
Azure portal
In the Azure portal, use the Application Settings tab to add the JAVA_OPTS setting.
Azure CLI
You can use the az functionapp config appsettings set command to set JAVA_OPTS , as in the following example:
Consumption plan
Dedicated plan / Premium plan
This example enables headless mode. Replace <APP_NAME> with the name of your function app, and
<RESOURCE_GROUP> with the resource group.
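# Assumed form of the command; on the Consumption plan, WEBSITE_USE_PLACEHOLDER=0 is also required
az functionapp config appsettings set \
    --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
    --settings "JAVA_OPTS=-Djava.awt.headless=true" "WEBSITE_USE_PLACEHOLDER=0"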
Third-party libraries
Azure Functions supports the use of third-party libraries. By default, all dependencies specified in your project
pom.xml file are automatically bundled during the mvn package goal. For libraries not specified as dependencies
in the pom.xml file, place them in a lib directory in the function's root directory. Dependencies placed in the
lib directory are added to the system class loader at runtime.
@FunctionName("BlobTrigger")
@StorageAccount("AzureWebJobsStorage")
public void blobTrigger(
@BlobTrigger(name = "content", path = "myblob/{fileName}", dataType = "binary") byte[] content,
@BindingName("fileName") String fileName,
final ExecutionContext context
) {
context.getLogger().info("Java Blob trigger function processed a blob.\n Name: " + fileName + "\n
Size: " + content.length + " Bytes");
}
Bindings
Input and output bindings provide a declarative way to connect to data from within your code. A function can
have multiple input and output bindings.
Input binding example
package com.example;
import com.microsoft.azure.functions.annotation.*;
To receive a batch of inputs, you can bind to String[] , POJO[] , List<String> , or List<POJO> .
@FunctionName("ProcessIotMessages")
public void processIotMessages(
@EventHubTrigger(name = "message", eventHubName = "%AzureWebJobsEventHubPath%", connection =
"AzureWebJobsEventHubSender", cardinality = Cardinality.MANY) List<TestEventData> messages,
final ExecutionContext context)
{
context.getLogger().info("Java Event Hub trigger received messages. Batch size: " +
messages.size());
}
This function is triggered whenever there's new data in the configured event hub. Because cardinality is set to MANY, the function receives a batch of messages from the event hub. EventData from the event hub is converted to TestEventData for the function execution.
Output binding example
You can bind an output binding to the return value by using $return .
package com.example;
import com.microsoft.azure.functions.annotation.*;
If there are multiple output bindings, use the return value for only one of them.
To send multiple output values, use OutputBinding<T> defined in the azure-functions-java-library package.
@FunctionName("QueueOutputPOJOList")
public HttpResponseMessage QueueOutputPOJOList(@HttpTrigger(name = "req", methods = { HttpMethod.GET,
HttpMethod.POST }, authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "itemsOut", queueName = "test-output-java-pojo", connection =
"AzureWebJobsStorage") OutputBinding<List<TestData>> itemsOut,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
itemsOut.getValue().add(testData1);
itemsOut.getValue().add(testData2);
You invoke this function on an HttpRequest. It writes multiple values to Queue storage.
Metadata
Some triggers send trigger metadata along with input data. You can use the annotation @BindingName to bind to trigger metadata.
package com.example;
import java.util.Optional;
import com.microsoft.azure.functions.annotation.*;
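@FunctionName("metadata")
public static String metadata(
    @HttpTrigger(name = "req", methods = { HttpMethod.GET, HttpMethod.POST }, authLevel = AuthorizationLevel.ANONYMOUS) Optional<String> body,
    @BindingName("name") String queryValue
) {
    // Returns the request body, or the "name" query string parameter when the body is empty
    return body.orElse(queryValue);
}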
In the preceding example, the queryValue is bound to the query string parameter name in the HTTP request
URL, http://{example.host}/api/metadata?name=test . Here's another example, showing how to bind to Id from
queue trigger metadata.
@FunctionName("QueueTriggerMetadata")
public void QueueTriggerMetadata(
@QueueTrigger(name = "message", queueName = "test-input-java-metadata", connection =
"AzureWebJobsStorage") String message,@BindingName("Id") String metadataId,
@QueueOutput(name = "output", queueName = "test-output-java-metadata", connection =
"AzureWebJobsStorage") OutputBinding<TestData> output,
final ExecutionContext context
) {
context.getLogger().info("Java Queue trigger function processed a message: " + message + " with
metadaId:" + metadataId );
TestData testData = new TestData();
testData.id = metadataId;
output.setValue(testData);
}
NOTE
The name provided in the annotation needs to match the metadata property.
Execution context
ExecutionContext , defined in the azure-functions-java-library , contains helper methods to communicate with
the functions runtime. For more information, see the ExecutionContext reference article.
Logger
Use getLogger , defined in ExecutionContext , to write logs from function code.
Example:
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;
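public class Function {
    public String echo(@HttpTrigger(name = "req", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) String req, ExecutionContext context) {
        if (req.isEmpty()) {
            context.getLogger().warning("Empty request body received by function executed with id: " + context.getInvocationId());
        }
        return String.format(req);
    }
}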
To stream logging output for your function app by using the Azure CLI, open a new command prompt, Bash, or
Terminal session, and enter the following command:
Bash
Cmd
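# Replace <APP_NAME> and <RESOURCE_GROUP> with your function app name and resource group
az webapp log tail --name <APP_NAME> --resource-group <RESOURCE_GROUP>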
The az webapp log tail command has options to filter output by using the --provider option.
To download the log files as a single ZIP file by using the Azure CLI, open a new command prompt, Bash, or
Terminal session, and enter the following command:
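az webapp log download --resource-group <RESOURCE_GROUP> --name <APP_NAME>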
You must have enabled file system logging in the Azure portal or the Azure CLI before running this command.
Environment variables
In Functions, app settings, such as service connection strings, are exposed as environment variables during execution. You can access these settings by using System.getenv("AzureWebJobsStorage").
The following example gets the application setting, with the key named myAppSetting :
public class Function {
    public String echo(@HttpTrigger(name = "req", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) String req, ExecutionContext context) {
        context.getLogger().info("My app setting value: " + System.getenv("myAppSetting"));
        return String.format(req);
    }
}
NOTE
The value of the FUNCTIONS_EXTENSION_VERSION app setting should be ~2 or ~3 for an optimized cold start experience.
Next steps
For more information about Azure Functions Java development, see the following resources:
Best practices for Azure Functions
Azure Functions developer reference
Azure Functions triggers and bindings
Local development and debug with Visual Studio Code, IntelliJ, and Eclipse
Remote Debug Java functions using Visual Studio Code
Maven plugin for Azure Functions
Streamline function creation through the azure-functions:add goal, and prepare a staging directory for ZIP
file deployment.
Azure Functions PowerShell developer guide
8/2/2022 • 21 minutes to read
This article provides details about how you write Azure Functions using PowerShell.
A PowerShell Azure function (function) is represented as a PowerShell script that executes when triggered. Each
function script has a related function.json file that defines how the function behaves, such as how it's triggered
and its input and output parameters. To learn more, see the Triggers and bindings article.
Like other kinds of functions, PowerShell script functions take in parameters that match the names of all the
input bindings defined in the function.json file. A TriggerMetadata parameter is also passed that contains
additional information on the trigger that started the function.
This article assumes that you have already read the Azure Functions developer reference. You should have also
completed the Functions quickstart for PowerShell to create your first PowerShell function.
Folder structure
The required folder structure for a PowerShell project looks like the following. This default can be changed. For
more information, see the scriptFile section below.
PSFunctionApp
| - MyFirstFunction
| | - run.ps1
| | - function.json
| - MySecondFunction
| | - run.ps1
| | - function.json
| - Modules
| | - myFirstHelperModule
| | | - myFirstHelperModule.psd1
| | | - myFirstHelperModule.psm1
| | - mySecondHelperModule
| | | - mySecondHelperModule.psd1
| | | - mySecondHelperModule.psm1
| - local.settings.json
| - host.json
| - requirements.psd1
| - profile.ps1
| - extensions.csproj
| - bin
At the root of the project, there's a shared host.json file that can be used to configure the function app. Each
function has a folder with its own code file (.ps1) and binding configuration file ( function.json ). The name of
the function.json file's parent directory is always the name of your function.
Certain bindings require the presence of an extensions.csproj file. Binding extensions, required in version 2.x
and later versions of the Functions runtime, are defined in the extensions.csproj file, with the actual library files
in the bin folder. When developing locally, you must register binding extensions. When developing functions in
the Azure portal, this registration is done for you.
In PowerShell function apps, you may optionally have a profile.ps1 file, which runs when a function app starts to run (otherwise known as a cold start). For more information, see PowerShell profile.
Defining a PowerShell script as a function
By default, the Functions runtime looks for your function in run.ps1 , where run.ps1 shares the same parent
directory as its corresponding function.json .
Your script is passed a number of arguments on execution. To handle these parameters, add a param block to
the top of your script as in the following example:
# $TriggerMetadata is optional here. If you don't need it, you can safely remove it from the param block
param($MyFirstInputBinding, $MySecondInputBinding, $TriggerMetadata)
TriggerMetadata parameter
The TriggerMetadata parameter is used to supply additional information about the trigger. The additional
metadata varies from binding to binding but they all contain a sys property that contains the following data:
$TriggerMetadata.sys
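The sys property contains the following data:

PROPERTY      DESCRIPTION                                            TYPE
MethodName    The name of the function that was triggered.           string
UtcNow        When, in UTC, the function was triggered.              DateTime
RandGuid      A unique guid for this execution of the function.      string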
Every trigger type has a different set of metadata. For example, the $TriggerMetadata for QueueTrigger contains InsertionTime, Id, and DequeueCount, among other things. For more information on the queue trigger's
metadata, go to the official documentation for queue triggers. Check the documentation on the triggers you're
working with to see what comes inside the trigger metadata.
Bindings
In PowerShell, bindings are configured and defined in a function's function.json. Functions interact with bindings in a number of ways.
Reading trigger and input data
Trigger and input bindings are read as parameters passed to your function. Input bindings have a direction set
to in in function.json. The name property defined in function.json is the name of the parameter, in the param
block. Since PowerShell uses named parameters for binding, the order of the parameters doesn't matter.
However, it's a best practice to follow the order of the bindings defined in the function.json .
param($MyFirstInputBinding, $MySecondInputBinding)
Writing output data
Output bindings have a direction set to out in function.json. You write to an output binding by using the Push-OutputBinding cmdlet, where the value passed to -Name matches the name property defined in function.json:

param($MyFirstInputBinding, $MySecondInputBinding)

Push-OutputBinding -Name myOutput -Value $myValue
You can also pass in a value for a specific binding through the pipeline.
param($MyFirstInputBinding, $MySecondInputBinding)

# Pass the value through the pipeline instead of -Value
$myValue | Push-OutputBinding -Name myOutput
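Push-OutputBinding behaves differently based on the type of binding. For example, assuming an HTTP output binding named Response, the first call sets the response:

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = "output #1"
})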
Because the output is to HTTP, which accepts a singleton value only, an error is thrown when Push-OutputBinding
is called a second time.
For outputs that only accept singleton values, you can use the -Clobber parameter to override the old value
instead of trying to add to a collection. The following example assumes that you have already added a value. By
using -Clobber , the response from the following example overrides the existing value to return a value of
"output #3":
The output binding for a Storage queue accepts multiple output values. In this case, calling the following
example after the first writes to the queue a list with two items: "output #1" and "output #2".
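# Assumes a Storage queue output binding named Queue; a first call wrote "output #1"
Push-OutputBinding -Name Queue -Value "output #2"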
The following example, when called after the previous two, adds two more values to the output collection:
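Push-OutputBinding -Name Queue -Value @("output #3", "output #4")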
When written to the queue, the message contains these four values: "output #1", "output #2", "output #3", and
"output #4".
Get-OutputBinding cmdlet
You can use the Get-OutputBinding cmdlet to retrieve the values currently set for your output bindings. This
cmdlet retrieves a hashtable that contains the names of the output bindings with their respective values.
The following is an example of using Get-OutputBinding to return current binding values:
Get-OutputBinding
Name Value
---- -----
MyQueue myData
MyOtherQueue myData
Get-OutputBinding also contains a parameter called -Name , which can be used to filter the returned binding, as
in the following example:
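Get-OutputBinding -Name MyQueue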
Name Value
---- -----
MyQueue myData
Logging
Logging in PowerShell functions works like regular PowerShell logging. You can use the logging cmdlets to write
to each output stream. Each cmdlet maps to a log level used by Functions.
FUNCTIONS LOGGING LEVEL    LOGGING CMDLET
Error                      Write-Error
Warning                    Write-Warning
Information                Write-Information, Write-Host, Write-Output (writes to the Information log level)
Debug                      Write-Debug
Trace                      Write-Progress, Write-Verbose
In addition to these cmdlets, anything written to the pipeline is redirected to the Information log level and
displayed with the default PowerShell formatting.
IMPORTANT
Using the Write-Verbose or Write-Debug cmdlets is not enough to see verbose and debug level logging. You must
also configure the log level threshold, which declares what level of logs you actually care about. To learn more, see
Configure the function app log level.
The following example sets the default threshold to enable verbose logging for all functions, and sets a threshold that enables debug logging for a function named MyFunction :
{
"logging": {
"logLevel": {
"Function.MyFunction": "Debug",
"default": "Trace"
}
}
}
For example, the following function.json defines an anonymous HTTP-triggered function with an HTTP output binding named Response:

{
"bindings": [
{
"type": "httpTrigger",
"direction": "in",
"authLevel": "anonymous"
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
}
run.ps1
param($req, $TriggerMetadata)

$name = $req.Query.Name

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = "Hello $name!"
})

Trigger and input data can also be cast by declaring a .NET type for the parameter. For example, blob content can be received as a string:

param([string] $myBlob)
PowerShell profile
In PowerShell, there's the concept of a PowerShell profile. If you're not familiar with PowerShell profiles, see
About profiles.
In PowerShell Functions, the profile script is executed once per PowerShell worker instance in the app when first deployed and after being idled (a cold start). When concurrency is enabled by setting the
PSWorkerInProcConcurrencyUpperBound value, the profile script is run for each runspace created.
When you create a function app using tools, such as Visual Studio Code and Azure Functions Core Tools, a
default profile.ps1 is created for you. The default profile is maintained on the Core Tools GitHub repository
and contains:
Automatic MSI authentication to Azure.
The ability to turn on the Azure PowerShell AzureRM aliases, if you would like.
PowerShell versions
The following table shows the PowerShell versions available to each major version of the Functions runtime, and
the .NET version required:
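FUNCTIONS VERSION    POWERSHELL VERSION     .NET VERSION
4.x (recommended)    PowerShell 7.2         .NET 6
3.x                  PowerShell 7.0         .NET Core 3.1
2.x                  PowerShell Core 6      .NET Core 2.2

You can see the current version by printing $PSVersionTable from any function. To run on a specific version locally, set FUNCTIONS_WORKER_RUNTIME_VERSION in the local.settings.json file, as in the following example: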
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "",
"FUNCTIONS_WORKER_RUNTIME": "powershell",
"FUNCTIONS_WORKER_RUNTIME_VERSION" : "~7"
}
}
Portal
PowerShell
1. In the Azure portal, browse to your function app.
2. Under Settings, select Configuration. On the General settings tab, locate the PowerShell version.
3. Choose your desired PowerShell Core version and select Save. When warned about the pending restart, choose Continue. The function app restarts on the chosen PowerShell version.
The function app restarts after the change is made to the configuration.
Dependency management
Functions lets you leverage the PowerShell Gallery for managing dependencies. With dependency management
enabled, the requirements.psd1 file is used to automatically download required modules. You enable this
behavior by setting the managedDependency property to true in the root of the host.json file, as in the following
example:
{
"managedDependency": {
"enabled": true
}
}
When you create a new PowerShell functions project, dependency management is enabled by default, with the
Azure Az module included. The maximum number of modules currently supported is 10. The supported syntax
is MajorNumber.* or exact module version, as shown in the following requirements.psd1 example:
@{
Az = '1.*'
SqlServer = '21.1.18147'
}
When you update the requirements.psd1 file, updated modules are installed after a restart.
Target specific versions
You may want to target a specific version of a module in your requirements.psd1 file. For example, if you wanted
to use an older version of Az.Accounts than the one in the included Az module, you would need to target a
specific version as shown in the following example:
@{
'Az.Accounts' = '1.9.5'
}
In this case, you also need to add an import statement to the top of your profile.ps1 file, which looks like the
following example:
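Import-Module Az.Accounts -RequiredVersion '1.9.5'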
In this way, the older version of the Az.Accounts module is loaded first when the function is started.
Dependency management considerations
The following considerations apply when using dependency management:
Managed dependencies requires access to https://www.powershellgallery.com to download modules.
When running locally, make sure that the runtime can access this URL by adding any required firewall
rules.
Managed dependencies currently don't support modules that require the user to accept a license, either
by accepting the license interactively, or by providing -AcceptLicense switch when invoking
Install-Module .
Essentially, your app upgrade starts within MDMaxBackgroundUpgradePeriod , and the upgrade process completes
within approximately the MDNewSnapshotCheckPeriod .
Custom modules
Leveraging your own custom modules in Azure Functions differs from how you would do it normally for
PowerShell.
On your local computer, the module gets installed in one of the globally available folders in your
$env:PSModulePath . When running in Azure, you don't have access to the modules installed on your machine.
This means that the $env:PSModulePath for a PowerShell function app differs from $env:PSModulePath in a
regular PowerShell script.
In Functions, PSModulePath contains two paths:
A Modules folder that exists at the root of your function app.
A path to a Modules folder that is controlled by the PowerShell language worker.
Function app-level modules folder
To use custom modules, you can place modules on which your functions depend in a Modules folder. From this
folder, modules are automatically available to the functions runtime. Any function in the function app can use
these modules.
NOTE
Modules specified in the requirements.psd1 file are automatically downloaded and included in the path so you don't need
to include them in the modules folder. These are stored locally in the $env:LOCALAPPDATA/AzureFunctions folder and in
the /data/ManagedDependencies folder when run in the cloud.
To take advantage of the custom module feature, create a Modules folder in the root of your function app. Copy
the modules you want to use in your functions to this location.
mkdir ./Modules
Copy-Item -Path /mymodules/mycustommodule -Destination ./Modules -Recurse
With a Modules folder, your function app should have the following folder structure:
PSFunctionApp
| - MyFunction
| | - run.ps1
| | - function.json
| - Modules
| | - MyCustomModule
| | - MyOtherCustomModule
| | - MySpecialModule.psm1
| - local.settings.json
| - host.json
| - requirements.psd1
When you start your function app, the PowerShell language worker adds this Modules folder to the
$env:PSModulePath so that you can rely on module autoloading just as you would in a regular PowerShell script.
Environment variables
In Functions, app settings, such as service connection strings, are exposed as environment variables during
execution. You can access these settings using $env:NAME_OF_ENV_VAR , as shown in the following example:
param($myTimer)

# Assumes an app setting named myAppSetting
Write-Host "My app setting value: $env:myAppSetting"
There are several ways that you can add, update, and delete function app settings:
In the Azure portal.
By using the Azure CLI.
By using Azure PowerShell.
Changes to function app settings require your function app to be restarted.
When running locally, app settings are read from the local.settings.json project file.
Concurrency
By default, the Functions PowerShell runtime can only process one invocation of a function at a time. However,
this concurrency level might not be sufficient in the following situations:
When you're trying to handle a large number of invocations at the same time.
When you have functions that invoke other functions inside the same function app.
There are a few concurrency models that you could explore depending on the type of workload:
Increase FUNCTIONS_WORKER_PROCESS_COUNT . This allows handling function invocations in multiple processes
within the same instance, which introduces certain CPU and memory overhead. In general, I/O-bound
functions will not suffer from this overhead. For CPU-bound functions, the impact may be significant.
Increase the PSWorkerInProcConcurrencyUpperBound app setting value. This allows creating multiple
runspaces within the same process, which significantly reduces CPU and memory overhead.
You set these environment variables in the app settings of your function app.
Depending on your use case, Durable Functions may significantly improve scalability. To learn more, see Durable
Functions application patterns.
NOTE
You might get "requests are being queued due to no available runspaces" warnings. Note that this isn't an error. The message is telling you that requests are being queued and that they'll be handled when the previous requests are completed.
Configure function scriptFile
By default, a PowerShell function is executed from run.ps1, a file that shares the same parent directory as its corresponding function.json. The scriptFile property in function.json can be used to get a folder structure that looks like the following example:
FunctionApp
| - host.json
| - myFunction
| | - function.json
| - lib
| | - PSFunction.ps1
In this case, the function.json for myFunction includes a scriptFile property referencing the file with the
exported function to run.
{
"scriptFile": "../lib/PSFunction.ps1",
"bindings": [
// ...
]
}
Use entryPoint
The scriptFile property can also reference a PowerShell module (.psm1), with the entryPoint property naming the exported function to run, as in the following folder structure:
FunctionApp
| - host.json
| - myFunction
| | - function.json
| - lib
| | - PSFunction.psm1
function Invoke-PSTestFunc {
    param($InputBinding, $TriggerMetadata)
    # ...
}
In this example, the configuration for myFunction includes a scriptFile property that references
PSFunction.psm1 , which is a PowerShell module in another folder. The entryPoint property references the
Invoke-PSTestFunc function, which is the entry point in the module.
{
"scriptFile": "../lib/PSFunction.psm1",
"entryPoint": "Invoke-PSTestFunc",
"bindings": [
// ...
]
}
With this configuration, the Invoke-PSTestFunc gets executed exactly as a run.ps1 would.
Next steps
For more information, see the following resources:
Best practices for Azure Functions
Azure Functions developer reference
Azure Functions triggers and bindings
Azure Functions Python developer guide
8/2/2022 • 25 minutes to read
This article is an introduction to developing for Azure Functions by using Python. It assumes that you've already
read the Azure Functions developer guide.
As a Python developer, you might also be interested in one of the following articles:
NOTE
Although you can develop your Python-based functions locally on Windows, Python functions are supported in Azure
only when they're running on Linux. See the list of supported operating system/runtime combinations.
Programming model
Azure Functions expects a function to be a stateless method in your Python script that processes input and
produces output. By default, the runtime expects the method to be implemented as a global method called main() in the __init__.py file. You can also specify an alternate entry point.
Data from triggers and bindings is bound to the function via method attributes that use the name property
defined in the function.json file. For example, the following function.json file describes a simple function
triggered by an HTTP request named req :
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
Based on this definition, the __init__.py file that contains the function code might look like the following example:
def main(req):
user = req.params.get('user')
return f'Hello, {user}!'
You can also explicitly declare the attribute types and return type in the function by using Python type
annotations. This action helps you to use the IntelliSense and autocomplete features that many Python code
editors provide.
import azure.functions as func


def main(req: func.HttpRequest) -> str:
    user = req.params.get('user')
    return f'Hello, {user}!'
Use the Python annotations included in the azure.functions.* package to bind inputs and outputs to your
methods.
To specify an alternate entry point, set the scriptFile and entryPoint properties in function.json. For example, the following function.json tells the runtime to use the customentry() method in the main.py file:

{
"scriptFile": "main.py",
"entryPoint": "customentry",
"bindings": [
...
]
}
Folder structure
The recommended folder structure for an Azure Functions project in Python looks like the following example:
<project_root>/
| - .venv/
| - .vscode/
| - my_first_function/
| | - __init__.py
| | - function.json
| | - example.py
| - my_second_function/
| | - __init__.py
| | - function.json
| - shared_code/
| | - __init__.py
| | - my_first_helper_function.py
| | - my_second_helper_function.py
| - tests/
| | - test_my_second_function.py
| - .funcignore
| - host.json
| - local.settings.json
| - requirements.txt
| - Dockerfile
The main project folder (<project_root>) can contain the following files:
local.settings.json: Used to store app settings and connection strings when functions are running locally. This
file isn't published to Azure. To learn more, see Local settings file.
requirements.txt: Contains the list of Python packages that the system installs when you're publishing to
Azure.
host.json: Contains configuration options that affect all functions in a function app instance. This file is
published to Azure. Not all options are supported when functions are running locally. To learn more, see the
host.json reference.
.vscode/: (Optional) Contains stored Visual Studio Code configurations. To learn more, see User and
Workspace Settings.
.venv/: (Optional) Contains a Python virtual environment that's used for local development.
Dockerfile: (Optional) Used when you're publishing your project in a custom container.
tests/: (Optional) Contains the test cases of your function app.
.funcignore: (Optional) Declares files that shouldn't be published to Azure. Usually, this file contains .vscode/
to ignore your editor setting, .venv/ to ignore the local Python virtual environment, tests/ to ignore test
cases, and local.settings.json to prevent local app settings from being published.
Each function has its own code file and binding configuration file (function.json).
When you deploy your project to a function app in Azure, the entire contents of the main project
(<project_root>) folder should be included in the package, but not the folder itself. That means host.json should
be in the package root. We recommend that you maintain your tests in a folder along with other functions. In
this example, the folder is tests/. For more information, see Unit testing.
Import behavior
You can import modules in your function code by using both absolute and relative references. Based on the
folder structure shown earlier, the following imports work from within the function file
<project_root>\my_first_function\__init__.py:
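from shared_code import my_first_helper_function  # absolute import
import shared_code.my_second_helper_function      # absolute import
from . import example                             # relative import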
NOTE
The shared_code/ folder needs to contain an __init__.py file to mark it as a Python package when you're using absolute
import syntax.
The __app__ import style and imports beyond the top-level package are deprecated, because the static type checker and the Python test frameworks don't support them.
Triggers and inputs
Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're defined differently in the function.json file, usage is identical in Python code. For example:
// function.json
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "req",
"direction": "in",
"type": "httpTrigger",
"authLevel": "anonymous",
"route": "items/{id}"
},
{
"name": "obj",
"direction": "in",
"type": "blob",
"path": "samples/{id}",
"connection": "AzureWebJobsStorage"
}
]
}
// local.settings.json
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsStorage": "<azure-storage-connection-string>"
}
}
# __init__.py
import azure.functions as func
import logging


def main(req: func.HttpRequest, obj: func.InputStream):
    logging.info(f'Python HTTP-triggered function processed: {obj.read()}')
When the function is invoked, the HTTP request is passed to the function as req . An entry will be retrieved from
Azure Blob Storage based on the ID in the route URL and made available as obj in the function body. Here, the
storage account specified is the connection string found in the AzureWebJobsStorage app setting, which is the
same storage account that the function app uses.
Outputs
Output can be expressed in the return value and in output parameters. If there's only one output, we
recommend using the return value. For multiple outputs, you'll have to use output parameters.
To use the return value of a function as the value of an output binding, set the name property of the binding to
$return in function.json.
To produce multiple outputs, use the set() method provided by the azure.functions.Out interface to assign a
value to the binding. For example, the following function can push a message to a queue and return an HTTP
response:
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "req",
"direction": "in",
"type": "httpTrigger",
"authLevel": "anonymous"
},
{
"name": "msg",
"direction": "out",
"type": "queue",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
},
{
"name": "$return",
"direction": "out",
"type": "http"
}
]
}
import azure.functions as func


def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> str:
    message = req.params.get('body')
    msg.set(message)
    return message
Logging
Access to the Azure Functions runtime logger is available via a root logging handler in your function app. This
logger is tied to Application Insights and allows you to flag warnings and errors that occur during the function
execution.
The following example logs an info message when the function is invoked via an HTTP trigger:
import logging
def main(req):
logging.info('Python HTTP trigger function processed a request.')
More logging methods are available that let you write to the console at different trace levels:
METHOD                       DESCRIPTION
logging.critical(message)    Writes a message with level CRITICAL on the root logger.
logging.error(message)       Writes a message with level ERROR on the root logger.
logging.warning(message)     Writes a message with level WARNING on the root logger.
logging.info(message)        Writes a message with level INFO on the root logger.
logging.debug(message)       Writes a message with level DEBUG on the root logger.
NOTE
To use the OpenCensus Python extensions, you need to enable Python worker extensions in your function app by setting
PYTHON_ENABLE_WORKER_EXTENSIONS to 1 . You also need to switch to using the Application Insights connection string
by adding the APPLICATIONINSIGHTS_CONNECTION_STRING setting to your application settings, if it's not already there.
// requirements.txt
...
opencensus-extension-azure-functions
opencensus-ext-requests
import json
import logging

import requests
from opencensus.extension.azure.functions import OpenCensusExtension
from opencensus.trace import config_integration

config_integration.trace_integrations(['requests'])

OpenCensusExtension.configure()


# The function definition and span below complete the truncated sample;
# context.tracer is provided by the OpenCensus extension.
def main(req, context):
    logging.info('Executing HttpTrigger with OpenCensus extension')

    # Use context.tracer to create spans
    with context.tracer.span("parent"):
        response = requests.get(url='https://example.com')

    return json.dumps({
        'method': req.method,
        'response': response.status_code,
        'ctx_func_name': context.function_name,
        'ctx_func_dir': context.function_directory,
        'ctx_invocation_id': context.invocation_id,
        'ctx_trace_context_Traceparent': context.trace_context.Traceparent,
        'ctx_trace_context_Tracestate': context.trace_context.Tracestate,
        'ctx_retry_context_RetryCount': context.retry_context.retry_count,
        'ctx_retry_context_MaxRetryCount': context.retry_context.max_retry_count,
    })
HTTP trigger and bindings
The following example shows how an HTTP-triggered function reads a name value from the query string or the request body. The surrounding function definition and the headers value are assumptions added to make the sample complete:

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    headers = {"my-http-header": "some-value"}

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        return func.HttpResponse(f"Hello {name}!", headers=headers)
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            headers=headers, status_code=400
        )
In this function, the value of the name query parameter is obtained from the params parameter of the
HttpRequest object. The JSON-encoded message body is read using the get_json method.
Likewise, you can set the status_code and headers information for the response message in the returned
HttpResponse object.
Web frameworks
You can use WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python
functions. This section shows how to modify your functions to support these frameworks.
First, the function.json file must be updated to include route in the HTTP trigger, as shown in the following
example:
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
],
"route": "{*route}"
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
The host.json file must also be updated to include an HTTP routePrefix value, as shown in the following
example:
{
"version": "2.0",
"logging":
{
"applicationInsights":
{
"samplingSettings":
{
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle":
{
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
},
"extensions":
{
"http":
{
"routePrefix": ""
}
}
}
Update the Python code file __init__.py, based on the interface that your framework uses. The following example shows an ASGI handler approach using FastAPI; a WSGI wrapper approach works similarly for Flask:
ASGI
WSGI
import azure.functions as func
import fastapi

app = fastapi.FastAPI()


@app.get("hello/{name}")
async def get_name(name: str):
    return {"name": name}


def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    return func.AsgiMiddleware(app).handle(req, context)
Context
To get the invocation context of a function during execution, include the context argument in its signature.
For example:
import azure.functions


def main(req: azure.functions.HttpRequest,
         context: azure.functions.Context) -> str:
    return f'{context.invocation_id}'
Global variables
It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure
Functions runtime often reuses the same process for multiple executions of the same app. To cache the results of
an expensive computation, declare it as a global variable:
CACHED_DATA = None
def main(req):
global CACHED_DATA
if CACHED_DATA is None:
CACHED_DATA = load_json()
Environment variables
In Azure Functions, application settings, such as service connection strings, are exposed as environment
variables during execution. There are two main ways to access these settings in your code:
METHOD                        DESCRIPTION
os.environ["myAppSetting"]    Tries to get the application setting by key name, raising an error when unsuccessful.
os.getenv("myAppSetting")     Tries to get the application setting by key name, returning null when unsuccessful.
For local development, application settings are maintained in the local.settings.json file.
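For example, the following function (a sketch assuming an app setting named myAppSetting) logs the setting's value:

import logging
import os

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # App settings are exposed as environment variables during execution
    setting = os.environ.get('myAppSetting', '')
    logging.info('My app setting value: %s', setting)
    return func.HttpResponse(f'My app setting value: {setting}')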
Python version
Azure Functions supports the following Python versions. These are official Python distributions.
FUNCTIONS VERSION    PYTHON VERSIONS
4.x                  3.9, 3.8, 3.7
3.x                  3.9, 3.8, 3.7, 3.6
2.x                  3.7, 3.6
To request a specific Python version when you create your function app in Azure, use the --runtime-version
option of the az functionapp create command. The --functions-version option sets the Azure Functions
runtime version.
Changing Python version
To set a Python function app to a specific language version, you need to specify the language and the version of
the language in linuxFxVersion field in site configuration. For example, to change Python app to use Python 3.8,
set linuxFxVersion to python|3.8 .
To learn more about the Azure Functions runtime support policy, see Language runtime support policy.
You can view and set linuxFxVersion from the Azure CLI by using the az functionapp config show command.
Replace <function_app> with the name of your function app. Replace <my_resource_group> with the name of the
resource group for your function app.
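az functionapp config show --name <function_app> --resource-group <my_resource_group>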
You see linuxFxVersion in the following output, which has been truncated for clarity:
{
...
"kind": null,
"limits": null,
"linuxFxVersion": <LINUX_FX_VERSION>,
"loadBalancing": "LeastRequests",
"localMySqlEnabled": false,
"location": "West US",
"logsDirectorySizeLimit": 35,
...
}
You can update the linuxFxVersion setting in the function app by using the az functionapp config set command.
In the following code:
Replace <FUNCTION_APP> with the name of your function app.
Replace <RESOURCE_GROUP> with the name of the resource group for your function app.
Replace <LINUX_FX_VERSION> with the Python version that you want to use, prefixed by python| . For example: python|3.9 .
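az functionapp config set --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> --linux-fx-version "<LINUX_FX_VERSION>"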
You can run the command from Azure Cloud Shell by selecting Try it in the preceding code sample. You can also use the Azure CLI locally to run the command after you use az login to sign in.
The function app restarts after you change the site configuration.
Local Python version
When running locally, the Azure Functions Core Tools uses the available Python version.
Package management
When you're developing locally by using the Azure Functions Core Tools or Visual Studio Code, add the names
and versions of the required packages to the requirements.txt file and install them by using pip .
For example, you can use the following requirements file and pip command to install the requests package
from PyPI:
requests==2.19.1
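pip install -r requirements.txt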
Publishing to Azure
When you're ready to publish, make sure that all your publicly available dependencies are listed in the
requirements.txt file. This file is at the root of your project directory.
You can also find project files and folders that are excluded from publishing, including the virtual environment
folder, in the root directory of your project.
Three build actions are supported for publishing your Python project to Azure: remote build, local build, and
builds that use custom dependencies.
You can also use Azure Pipelines to build your dependencies and publish by using continuous delivery (CD). To
learn more, see Continuous delivery by using Azure DevOps.
Remote build
When you use a remote build, dependencies are restored on the server, and native dependencies match the production environment. This results in a smaller deployment package to upload. Use a remote build when you're developing Python apps on Windows. If your project has custom dependencies, you can use a remote build with an extra index URL.
Dependencies are obtained remotely based on the contents of the requirements.txt file. Remote build is the
recommended build method. By default, Azure Functions Core Tools requests a remote build when you use the
following func azure functionapp publish command to publish your Python project to Azure. Replace
<APP_NAME> with the name of your function app in Azure.
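func azure functionapp publish <APP_NAME>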
The Azure Functions extension for Visual Studio Code also requests a remote build by default.
Local build
Dependencies are obtained locally based on the contents of the requirements.txt file. You can prevent a remote
build by using the following func azure functionapp publish command to publish with a local build. Replace
<APP_NAME> with the name of your function app in Azure.
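func azure functionapp publish <APP_NAME> --build local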
When you use the --build local option, project dependencies are read from the requirements.txt file. Those
dependent packages are downloaded and installed locally. Project files and dependencies are deployed from
your local computer to Azure. This results in the upload of a larger deployment package to Azure. If you can't get
the requirements.txt file by using Core Tools, you must use the custom dependencies option for publishing.
We don't recommend using local builds when you're developing locally on Windows.
Custom dependencies
When your project has dependencies not found in the Python Package Index, there are two ways to build the
project.
Remote build with extra index URL
When your packages are available from an accessible custom package index, use a remote build. Before
publishing, make sure to create an app setting named PIP_EXTRA_INDEX_URL . The value for this setting is the URL
of your custom package index. Using this setting tells the remote build to run pip install with the
--extra-index-url option. To learn more, see the Python pip install documentation.
You can also use basic authentication credentials with your extra package index URLs. To learn more, see Basic
authentication credentials in Python documentation.
NOTE
If you need to change the base URL of the Python Package Index from the default of https://pypi.org/simple , you
can do this by creating an app setting named PIP_INDEX_URL that points to a different package index URL. Like
PIP_EXTRA_INDEX_URL , PIP_INDEX_URL is a pip-specific application setting that changes the source for pip to use.
When you're using custom dependencies, use the following --no-build publishing option because you've
already installed the dependencies into the project folder. Replace <APP_NAME> with the name of your function
app in Azure.
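func azure functionapp publish <APP_NAME> --no-build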
Unit testing
You can test functions written in Python the same way that you test other Python code: through standard testing
frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an
appropriate class from the azure.functions package. Because the azure.functions package isn't immediately
available, be sure to install it via your requirements.txt file as described in the earlier Package management
section.
Take my_second_function as an example. Following is a mock test of an HTTP triggered function.
First, to create the <project_root>/my_second_function/function.json file and define this function as an HTTP
trigger, use the following code:
{
"scriptFile": "__init__.py",
"entryPoint": "main",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
Next, implement the function body and the shared helper it calls. The helper name double_it is illustrative:

# <project_root>/my_second_function/__init__.py
import azure.functions as func

from shared_code import my_second_helper_function


# An HTTP trigger that doubles the integer passed in the ?value= query parameter
def main(req: func.HttpRequest) -> func.HttpResponse:
    initial_value = int(req.params.get('value'))
    doubled_value = my_second_helper_function.double_it(initial_value)
    return func.HttpResponse(
        body=f"{initial_value} * 2 = {doubled_value}",
        status_code=200
    )

# <project_root>/shared_code/__init__.py
# Empty __init__.py file marks shared_code folder as a Python package

# <project_root>/shared_code/my_second_helper_function.py
def double_it(value: int) -> int:
    return value * 2
You can start writing test cases for your HTTP trigger:
# <project_root>/tests/test_my_second_function.py
import unittest

import azure.functions as func
from my_second_function import main


class TestFunction(unittest.TestCase):
    def test_my_second_function(self):
        # Construct a mock HTTP request.
        req = func.HttpRequest(
            method='GET',
            body=None,
            url='/api/my_second_function',
            params={'value': '21'})

        # Call the function.
        resp = main(req)

        # Check the output.
        self.assertEqual(resp.get_body(), b'21 * 2 = 42')
Inside your .venv Python virtual environment, install your favorite Python test framework, such as
pip install pytest . Then run pytest tests to check the test result.
Temporary files
The tempfile.gettempdir() method returns a temporary folder, which on Linux is /tmp. Your application can use
this directory to store temporary files that your functions generate and use during execution.
IMPORTANT
Files written to the temporary directory aren't guaranteed to persist across invocations. During scale-out, temporary files
aren't shared between instances.
The following example creates a named temporary file in the temporary directory (/tmp):
import logging
import azure.functions as func
import tempfile
from os import listdir
#---
tempFilePath = tempfile.gettempdir()
fp = tempfile.NamedTemporaryFile()
fp.write(b'Hello world!')
filesDirListInTemp = listdir(tempFilePath)
We recommend that you maintain your tests in a folder that's separate from the project folder. This action keeps
you from deploying test code with your app.
Preinstalled libraries
A few libraries come with the runtime for Azure Functions on Python.
Python Standard Library
The Python Standard Library contains a list of built-in Python modules that are shipped with each Python
distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems,
these libraries are installed with Python. On Unix systems, package collections provide them.
To view the full details of these libraries, use these links:
Python 3.6 Standard Library
Python 3.7 Standard Library
Python 3.8 Standard Library
Python 3.9 Standard Library
Worker dependencies
The Python worker for Azure Functions requires a specific set of libraries. You can also use these libraries in your
functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they
might not be available to your code when you're running outside Azure Functions. You can find a detailed list of dependencies in the install_requires section in the setup.py file.
NOTE
If your function app's requirements.txt file contains an azure-functions-worker entry, remove it. The Azure Functions
platform automatically manages this worker, and we regularly update it with new features and bug fixes. Manually
installing an old version of the worker in requirements.txt might cause unexpected problems.
NOTE
If your package contains certain libraries that might collide with the worker's dependencies (for example, protobuf,
TensorFlow, or grpcio), configure PYTHON_ISOLATE_WORKER_DEPENDENCIES to 1 in app settings to prevent your
application from referring to the worker's dependencies. This feature is in preview.
Python worker extensions
The Python worker process that runs in Azure Functions lets you integrate third-party extension libraries into your function app. These extensions can act in the following scopes:

SCOPE                DESCRIPTION
Application level    When the extension is imported into any function trigger, it applies to every function execution in the app.
Function level       Execution is limited to only the specific function trigger into which it's imported.
Review the information for an extension to learn more about the scope in which the extension runs.
Extensions implement a Python worker extension interface. This action lets the Python worker process call into
the extension code during the function execution lifecycle.
Using extensions
You can use a Python worker extension library in your Python functions by following these basic steps:
1. Add the extension package in the requirements.txt file for your project.
2. Install the library into your app.
3. Add the application setting PYTHON_ENABLE_WORKER_EXTENSIONS :
To add the setting locally, add "PYTHON_ENABLE_WORKER_EXTENSIONS": "1" in the Values section of your
local.settings.json file.
To add the setting in Azure, add PYTHON_ENABLE_WORKER_EXTENSIONS=1 to your app settings.
4. Import the extension module into your function trigger.
5. Configure the extension instance, if needed. Configuration requirements should be called out in the
extension's documentation.
IMPORTANT
Microsoft doesn't support or warranty third-party Python worker extension libraries. Make sure that any extensions you
use in your function app are trustworthy. You bear the full risk of using a malicious or poorly written extension.
Third parties should provide specific documentation on how to install and consume their specific extension in
your function app. For a basic example of how to consume an extension, see Consuming your extension.
Here are examples of using extensions in a function app, by scope:
Application level
Function level
# <project_root>/requirements.txt
application-level-extension==1.0.0

# <project_root>/Trigger/__init__.py
# Hypothetical application-level extension; the package name and
# configure() call below are illustrative only.
from application_level_extension import AppExtension
AppExtension.configure(key='value')

def main(req):
    # The extension now applies to every function execution in the app
    ...
Creating extensions
Extensions are created by third-party library developers who have created functionality that can be integrated
into Azure Functions. An extension developer designs, implements, and releases Python packages that contain
custom logic designed specifically to be run in the context of function execution. These extensions can be
published either to the PyPI registry or to GitHub repositories.
To learn how to create, package, publish, and consume a Python worker extension package, see Develop Python
worker extensions for Azure Functions.
Application-level extensions
An extension inherited from AppExtensionBase runs in an application scope.
AppExtensionBase exposes the following abstract class methods for you to implement:
METHOD                          DESCRIPTION
configure                       Called from function code when it's needed to configure the extension.
post_function_load_app_level    Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails.
Function-level extensions
An extension that inherits from FuncExtensionBase runs in a specific function trigger.
FuncExtensionBase exposes the following abstract class methods for implementations:
METHOD                DESCRIPTION
post_function_load    Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails.
Async
By default, a host instance for Python can process only one function invocation at a time. This is because Python
is a single-threaded runtime. For a function app that processes a large number of I/O events or is being I/O
bound, you can significantly improve performance by running functions asynchronously. For more information, see Improve throughput performance of Python apps in Azure Functions.
Next steps
For more information, see the following resources:
Azure Functions package API documentation
Best practices for Azure Functions
Azure Functions triggers and bindings
Blob Storage bindings
HTTP and webhook bindings
Azure Queue Storage bindings
Timer trigger
Having issues? Let us know.
Azure Functions JavaScript developer guide
8/2/2022 • 26 minutes to read
This guide contains detailed information to help you succeed developing Azure Functions using JavaScript.
As an Express.js, Node.js, or JavaScript developer, if you're new to Azure Functions, please consider first reading
one of the following articles:
Folder structure
The required folder structure for a JavaScript project looks like the following. This default can be changed. For
more information, see the scriptFile section below.
FunctionsProject
| - MyFirstFunction
| | - index.js
| | - function.json
| - MySecondFunction
| | - index.js
| | - function.json
| - SharedCode
| | - myFirstHelperFunction.js
| | - mySecondHelperFunction.js
| - node_modules
| - host.json
| - package.json
| - extensions.csproj
At the root of the project, there's a shared host.json file that can be used to configure the function app. Each
function has a folder with its own code file (.js) and binding configuration file (function.json). The name of
function.json 's parent directory is always the name of your function.
The binding extensions required in version 2.x of the Functions runtime are defined in the extensions.csproj
file, with the actual library files in the bin folder. When developing locally, you must register binding
extensions. When developing functions in the Azure portal, this registration is done for you.
Exporting a function
JavaScript functions must be exported via module.exports (or exports ). Your exported function should be a
JavaScript function that executes when triggered.
By default, the Functions runtime looks for your function in index.js , where index.js shares the same parent
directory as its corresponding function.json . In the default case, your exported function should be the only
export from its file or the export named run or index . To configure the file location and export name of your
function, read about configuring your function's entry point below.
Your exported function is passed a number of arguments on execution. The first argument it takes is always a
context object.
2.x+
1.x
When using the async function declaration or plain JavaScript Promises in version 2.x, 3.x, or 4.x of the
Functions runtime, you don't need to explicitly call the context.done callback to signal that your function has
completed. Your function completes when the exported async function/Promise completes.
The following example is a simple function that logs that it was triggered and immediately completes execution.
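module.exports = async function (context) {
    context.log('JavaScript trigger function processed a request.');
};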
When exporting an async function, you can also configure an output binding to take the return value. This is
recommended if you only have one output binding.
Returning from the function
To assign an output using return , change the name property to $return in function.json .
{
"type": "http",
"direction": "out",
"name": "$return"
}
In this case, your function should look like the following example:
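module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    // The returned object is assigned to the $return output binding
    return {
        body: "Hello, world!"
    };
};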
Bindings
In JavaScript, bindings are configured and defined in a function's function.json. Functions interact with bindings
a number of ways.
Inputs
Inputs are divided into two categories in Azure Functions: one is the trigger input and the other is the additional input. Trigger and other input bindings (bindings of direction === "in" ) can be read by a function in three ways:
[Recommended] As parameters passed to your function. They're passed to the function in the
same order that they're defined in function.json. The name property defined in function.json doesn't need
to match the name of your parameter, although it should.
As members of the context.bindings object. Each member is named by the name property defined in function.json.
As inputs using the JavaScript arguments object. This is essentially the same as passing inputs as parameters, but allows you to dynamically handle inputs.
Outputs
Outputs (bindings of direction === "out" ) can be written to by a function in a number of ways. In all cases, the
name property of the binding as defined in function.json corresponds to the name of the object member written
to in your function.
You can assign data to output bindings in one of the following ways (don't combine these methods):
[Recommended for multiple outputs] Returning an object. If you are using an async/Promise
returning function, you can return an object with assigned output data. In the example below, the output
bindings are named "httpResponse" and "queueOutput" in function.json.
[Recommended for single output] Returning a value directly and using the $return binding
name. This only works for async/Promise returning functions. See example in exporting an async
function.
Assigning values to context.bindings You can assign values directly to context.bindings.
context object
The runtime uses a context object to pass data to and from your function and the runtime. Used to read and
set data from bindings and for writing to logs, the context object is always the first parameter passed to a
function.
The context passed into your function exposes an executionContext property, which is an object with the following properties:

PROPERTY NAME        TYPE      DESCRIPTION
invocationId         String    Provides a unique identifier for the specific function invocation.
functionName         String    Provides the name of the running function.
functionDirectory    String    Provides the functions app directory.
context.bindings property
context.bindings
Returns a named object that is used to read or assign binding data. Input and trigger binding data can be accessed by reading properties on context.bindings. Output binding data can be assigned by adding data to context.bindings.
For example, the following binding definitions in your function.json let you access the contents of a queue from
context.bindings.myInput and assign outputs to a queue using context.bindings.myOutput .
{
"type":"queue",
"direction":"in",
"name":"myInput"
...
},
{
"type":"queue",
"direction":"out",
"name":"myOutput"
...
}
// myInput contains the input data, which may have properties such as "name"
var author = context.bindings.myInput.name;
// Similarly, you can set your output data
context.bindings.myOutput = {
    some_text: 'hello world',
    a_number: 1
};
In a synchronous function, you can choose to define output binding data using the context.done method
instead of the context.binding object (see below).
context.bindingData property
context.bindingData
Returns a named object that contains trigger metadata and function invocation data ( invocationId ,
sys.methodName , sys.utcNow , sys.randGuid ). For an example of trigger metadata, see this event hubs example.
context.done method
2.x
1.x
In 2.x, 3.x, and 4.x, the function should be marked as async even if there's no awaited function call inside the
function, and the function doesn't need to call context.done to indicate the end of the function.
context.log method
context.log(message)
Allows you to write to the streaming function logs at the default trace level, with other logging levels available.
Trace logging is described in detail in the next section.
All context.log methods support the same parameter format that's supported by the Node.js util.format
method. Consider the following code, which writes function logs by using the default trace level:
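context.log('Node.js HTTP trigger function processed a request. RequestUri=' + req.originalUrl);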
You can also write the same code in the following format:
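context.log('Node.js HTTP trigger function processed a request. RequestUri=%s', req.originalUrl);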
NOTE
Don't use console.log to write trace outputs. Because output from console.log is captured at the function app level,
it's not tied to a specific function invocation and isn't displayed in a specific function's logs. Also, version 1.x of the
Functions runtime doesn't support using console.log to write to the console.
Trace levels
In addition to the default level, the following logging methods are available that let you write function logs at
specific trace levels.
METHOD                          DESCRIPTION
context.log.error(message)      Writes an error-level event to the logs.
context.log.warn(message)       Writes a warning-level event to the logs.
context.log.info(message)       Writes an info-level event to the logs.
context.log.verbose(message)    Writes a verbose-level event to the logs.
The following example writes the same log at the warning trace level, instead of the info level:
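context.log.warn('Something has happened.');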
Because error is the highest trace level, this trace is written to the output at all trace levels as long as logging is
enabled.
Configure the trace level for logging
Functions lets you define the threshold trace level for writing to the logs or the console. The specific threshold
settings depend on your version of the Functions runtime.
2.x+
1.x
To set the threshold for traces written to the logs, use the logging.logLevel property in the host.json file. This
JSON object lets you define a default threshold for all functions in your function app, plus you can define
specific thresholds for individual functions. To learn more, see How to configure monitoring for Azure Functions.
// Use this with 'tagOverrides' to correlate custom telemetry to the parent function invocation.
var operationIdOverride = {"ai.operation.id":context.traceContext.traceparent};
The tagOverrides parameter sets the operation_Id to the function's invocation ID. This setting enables you to
correlate all of the automatically generated and custom telemetry for a given function invocation.
Response object
The context.res (response) object has the following properties:
PROPERTY    DESCRIPTION
cookies     An array of HTTP cookie objects that are set in the response. An HTTP cookie object has a name, value, and other cookie properties, such as maxAge or sameSite.
// You can access your HTTP request off the context ...
if(context.req.body.emoji === ':pizza:') context.log('Yay!');
// and also set your HTTP response
context.res = { status: 202, body: 'You successfully ordered more coffee!' };
From the named input and output bindings. In this way, the HTTP trigger and bindings work the
same as any other binding. The following example sets the response object by using a named response
binding:
{
"type": "http",
"direction": "out",
"name": "response"
}
[Response only] By returning the response. A special binding name of $return allows you to
assign the function's return value to the output binding. The following HTTP output binding defines a
$return output parameter:
{
"type": "http",
"direction": "out",
"name": "$return"
}
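With this configuration, the function returns the response instead of setting context.res, as in this minimal sketch:
module.exports = async function (context, req) {
    return { status: 201, body: 'Insert succeeded.' };
};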
Node version
The following table shows current supported Node.js versions for each major version of the Functions runtime,
by operating system:
You can see the current version that the runtime is using by logging process.version from any function.
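context.log(process.version);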
Setting the Node version
For Windows function apps, target the version in Azure by setting the WEBSITE_NODE_DEFAULT_VERSION app setting
to a supported LTS version, such as ~16 .
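For example, with the Azure CLI (a sketch; the app and resource group names are placeholders):
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings WEBSITE_NODE_DEFAULT_VERSION=~16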
To learn more about the Azure Functions runtime support policy, see this article.
Dependency management
To use community libraries in your JavaScript code, you need to ensure that all dependencies are installed
on your function app in Azure.
NOTE
You should define a package.json file at the root of your Function App. Defining the file lets all functions in the app
share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by
adding a package.json file in the folder of a specific function.
When you deploy a function app from source control, any package.json file present in your repo triggers an
npm install in its folder during deployment. But when deploying via the portal or CLI, you have to manually
install the packages.
There are two ways to install packages on your Function App:
Deploying with Dependencies
1. Install all requisite packages locally by running npm install .
2. Deploy your code, and ensure that the node_modules folder is included in the deployment.
Using Kudu (Windows only)
1. Go to https://<function_app_name>.scm.azurewebsites.net .
2. Select Debug Console > CMD .
3. Go to D:\home\site\wwwroot , and then drag your package.json file to the wwwroot folder at the top half
of the page.
You can upload files to your function app in other ways also. For more information, see How to update
function app files.
4. After the package.json file is uploaded, run the npm install command in the Kudu remote execution console.
This action downloads the packages indicated in the package.json file and restarts the function app.
Environment variables
Add your own environment variables to a function app, in both your local and cloud environments, such as
operational secrets (connection strings, keys, and endpoints) or environmental settings (such as profiling
variables). Access these settings using process.env in your function code.
In local development environment
When running locally, your functions project includes a local.settings.json file, where you store your
environment variables in the Values object.
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "",
"FUNCTIONS_WORKER_RUNTIME": "node",
"translatorTextEndPoint": "https://api.cognitive.microsofttranslator.com/",
"translatorTextKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"languageWorkers__node__arguments": "--prof"
}
}
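In your function code, these values can then be read from process.env, as in this sketch (the setting name matches the sample above):
module.exports = async function (context, req) {
    // Returns undefined when the setting isn't defined.
    const endpoint = process.env['translatorTextEndPoint'];
    context.log('Translator endpoint: ' + endpoint);
};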
ECMAScript modules (ES modules) are the new official standard module system for Node.js. So far, the code
samples in this article use the CommonJS syntax. When running Azure Functions in Node.js 14 or higher, you
can choose to write your functions using ES modules syntax.
To use ES modules in a function, change its filename to use a .mjs extension. The following index.mjs file
example is an HTTP triggered function that uses ES modules syntax to import the uuid library and return a
value.
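// index.mjs
import { v4 as uuidv4 } from 'uuid';

export default async function (context, req) {
    context.res.body = uuidv4();
};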
Using scriptFile
By default, a JavaScript function is executed from index.js, a file that shares the same parent directory as its
corresponding function.json.
The scriptFile property can be used to get a folder structure that looks like the following example:
FunctionApp
| - host.json
| - myNodeFunction
| | - function.json
| - lib
| | - sayHello.js
| - node_modules
| | - ... packages ...
| - package.json
The function.json for myNodeFunction should include a scriptFile property pointing to the file with the
exported function to run.
{
"scriptFile": "../lib/sayHello.js",
"bindings": [
...
]
}
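Here, sayHello.js exports the function to run, as in this minimal sketch (the log message is illustrative):
// lib/sayHello.js
module.exports = async function (context) {
    context.log('Hello from sayHello.js');
};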
Using entryPoint
In scriptFile (or index.js ), a function must be exported using module.exports in order to be found and run.
By default, the function that executes when triggered is the only export from that file, the export named run , or
the export named index .
This can be configured using entryPoint in function.json , as in the following example:
{
"entryPoint": "logFoo",
"bindings": [
...
]
}
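The referenced file then exports logFoo by name, as in this sketch (the log message is illustrative):
async function logFoo(context) {
    context.log("You've run this function!");
}

module.exports = { logFoo };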
In Functions v2.x or higher, which supports the this parameter in user functions, the function code could then
be as in the following example:
class MyObj {
    constructor() {
        this.foo = 1;
    };

    async logFoo(context) {
        context.log("Foo is " + this.foo);
    }
}

const myObj = new MyObj();
module.exports = myObj;
In this example, it's important to note that although an object is being exported, there are no guarantees for
preserving state between executions.
Local debugging
When started with the --inspect parameter, a Node.js process listens for a debugging client on the specified
port. In Azure Functions 2.x or higher, you can specify arguments to pass into the Node.js process that runs your
code by adding the environment variable or App Setting languageWorkers:node:arguments = <args> .
To debug locally, add "languageWorkers:node:arguments": "--inspect=5858" under Values in your
local.settings.json file and attach a debugger to port 5858.
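A minimal local.settings.json sketch for this configuration:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "languageWorkers:node:arguments": "--inspect=5858"
  }
}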
When debugging using VS Code, the --inspect parameter is automatically added using the port value in the
project's launch.json file.
In version 1.x, setting languageWorkers:node:arguments won't work. The debug port can be selected with the
--nodeDebugPort parameter on Azure Functions Core Tools.
NOTE
You can only configure languageWorkers:node:arguments when running the function app locally.
Testing
Testing your functions includes:
HTTP end-to-end: To test a function from its HTTP endpoint, you can use any tool that can make an HTTP request, such as cURL, Postman, or JavaScript's fetch method.
Integration testing: Integration testing includes the function app layer. You need to control the parameters into the function, including the request and the context. The context is unique to each kind of trigger, which means you need to know the incoming and outgoing bindings for that trigger type. Learn more about integration testing and mocking the context layer with an experimental GitHub repo, https://github.com/anthonychu/azure-functions-test-utils.
Unit testing: Unit testing is performed within the function app. You can use any tool that can test JavaScript, such as Jest or Mocha.
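As a minimal sketch, a Jest unit test might stub the context and request objects; the file name and the function's response shape are assumptions:
// index.test.js
const httpFunction = require('./index');

test('responds with a body that mentions the caller', async () => {
    // Hand-rolled stubs for the context and request objects.
    const context = { log: jest.fn(), res: {} };
    const req = { query: { name: 'Azure' } };

    await httpFunction(context, req);

    expect(context.res.body).toContain('Azure');
});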
TypeScript
When you target version 2.x or higher of the Functions runtime, both Azure Functions for Visual Studio Code
and the Azure Functions Core Tools let you create function apps using a template that supports TypeScript
function app projects. The template generates package.json and tsconfig.json project files that make it easier
to transpile, run, and publish JavaScript functions from TypeScript code with these tools.
A generated .funcignore file is used to indicate which files are excluded when a project is published to Azure.
TypeScript files (.ts) are transpiled into JavaScript files (.js) in the dist output directory. TypeScript templates
use the scriptFile parameter in function.json to indicate the location of the corresponding .js file in the
dist folder. The output location is set by the template by using the outDir parameter in the tsconfig.json file. If
you change this setting or the name of the folder, the runtime isn't able to find the code to run.
The way that you locally develop and deploy from a TypeScript project depends on your development tool.
Visual Studio Code
The Azure Functions for Visual Studio Code extension lets you develop your functions using TypeScript. The
Core Tools is a requirement of the Azure Functions extension.
To create a TypeScript function app in Visual Studio Code, choose TypeScript as your language when you create
a function app.
When you press F5 to run the app locally, transpilation is done before the host (func.exe) is initialized.
When you deploy your function app to Azure using the Deploy to function app... button, the Azure Functions
extension first generates a production-ready build of JavaScript files from the TypeScript source files.
Azure Functions Core Tools
There are several ways in which a TypeScript project differs from a JavaScript project when using the Core Tools.
Create project
To create a TypeScript function app project using Core Tools, you must specify the TypeScript language option
when you create your function app. You can do this in one of the following ways:
Run the func init command, select node as your language stack, and then select typescript .
Run the func init --worker-runtime typescript command.
Run local
To run your function app code locally using Core Tools, use the following commands instead of func host start:
npm install
npm start
Publish to Azure
Before you use the func azure functionapp publish command to deploy to Azure, you create a production-
ready build of JavaScript files from the TypeScript source files.
The following commands prepare and publish your TypeScript project using Core Tools:
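npm run build:production
func azure functionapp publish <APP_NAME>
The build:production script comes from the package.json generated by the TypeScript template; if your template version names it differently, use the corresponding build script.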
In this command, replace <APP_NAME> with the name of your function app.
When writing Azure Functions in JavaScript, you should write code using the async and await keywords.
Writing code using async and await instead of callbacks or .then and .catch with Promises helps avoid two
common problems:
Throwing uncaught exceptions that crash the Node.js process, potentially affecting the execution of other
functions.
Unexpected behavior, such as missing logs from context.log, caused by asynchronous calls that aren't
properly awaited.
In the example below, the asynchronous method fs.readFile is invoked with an error-first callback function as
its second parameter. This code causes both of the issues mentioned above. An exception that isn't explicitly
caught in the correct scope can crash the entire process (issue #1). Calling the 1.x context.done() outside of the
scope of the callback function means that the function invocation may end before the file is read (issue #2). In
this example, calling 1.x context.done() too early results in missing log entries starting with Data from file: .
// NOT RECOMMENDED PATTERN
const fs = require('fs');

module.exports = function (context) {
    fs.readFile('./hello.txt', (err, data) => {
        if (err) {
            context.log.error('ERROR', err);
            // BUG #1: the throw results in an uncaught exception that crashes the entire process
            throw err;
        }
        context.log(`Data from file: ${data}`);
        // context.done() should be called here
    });
    // BUG #2: calling done here means the invocation may end before the file is read
    context.done();
}
Using the async and await keywords helps avoid both of these errors. You should use the Node.js utility
function util.promisify to turn error-first callback-style functions into awaitable functions.
In the example below, any unhandled exceptions thrown during the function execution only fail the individual
invocation that raised an exception. The await keyword means that steps following readFileAsync only execute
after readFile is complete. With async and await , you also don't need to call the context.done() callback.
// Recommended pattern
const fs = require('fs');
const util = require('util');
const readFileAsync = util.promisify(fs.readFile);

module.exports = async function (context) {
    let data;
    try {
        data = await readFileAsync('./hello.txt');
    } catch (err) {
        context.log.error('ERROR', err);
        // The rethrown exception is handled by the Functions runtime and only fails this invocation
        throw err;
    }
    context.log(`Data from file: ${data}`);
}
Next steps
For more information, see the following resources:
Best practices for Azure Functions
Azure Functions developer reference
Azure Functions triggers and bindings
Azure Functions developer guide
In Azure Functions, all functions share a few core technical concepts and components, regardless of the
language or binding you use. Before you jump into learning details specific to a given language or binding, be
sure to read through this overview, which applies to all of them.
This article assumes that you've already read the Azure Functions overview.
Function code
A function is the primary concept in Azure Functions. A function contains two important pieces - your code,
which can be written in a variety of languages, and some config, the function.json file. For compiled languages,
this config file is generated automatically from annotations in your code. For scripting languages, you must
provide the config file yourself.
The function.json file defines the function's trigger, bindings, and other configuration settings. Every function has
one and only one trigger. The runtime uses this config file to determine the events to monitor and how to pass
data into and return data from a function execution. The following is an example function.json file.
{
"disabled":false,
"bindings":[
// ... bindings here
{
"type": "bindingType",
"direction": "in",
"name": "myParamName",
// ... more depending on binding
}
]
}
For more information, see Azure Functions triggers and bindings concepts.
The bindings property is where you configure both triggers and bindings. Each binding shares a few common
settings, plus some settings that are specific to a particular type of binding. Every binding requires the
following settings:
type: The name of the binding type. For example, queueTrigger.
direction: Indicates whether the binding is for receiving data into the function or sending data from the function. For example, in or out.
name: The name that is used for the bound data in the function. For example, myQueue.
Function app
A function app provides an execution context in Azure in which your functions run. As such, it is the unit of
deployment and management for your functions. A function app is comprised of one or more individual
functions that are managed, deployed, and scaled together. All of the functions in a function app share the same
pricing plan, deployment method, and runtime version. Think of a function app as a way to organize and
collectively manage your functions. To learn more, see How to manage a function app.
NOTE
All functions in a function app must be authored in the same language. In previous versions of the Azure Functions
runtime, this wasn't required.
Folder structure
The code for all the functions in a specific function app is located in a root project folder that contains a host
configuration file. The host.json file contains runtime-specific configurations and is in the root folder of the
function app. A bin folder contains packages and other library files that the function app requires. Specific folder
structures required by the function app depend on language:
C# compiled (.csproj)
C# script (.csx)
F# script
Java
JavaScript
PowerShell
Python
In version 2.x and higher of the Functions runtime, all functions in the function app must share the same
language stack.
The above is the default (and recommended) folder structure for a Function app. If you wish to change the file
location of a function's code, modify the scriptFile section of the function.json file. We also recommend using
package deployment to deploy your project to your function app in Azure. You can also use existing tools like
continuous integration and deployment and Azure DevOps.
NOTE
If deploying a package manually, make sure to deploy your host.json file and function folders directly to the wwwroot
folder. Do not include the wwwroot folder in your deployments. Otherwise, you end up with wwwroot\wwwroot folders.
Parallel execution
When multiple triggering events occur faster than a single-threaded function runtime can process them, the
runtime may invoke the function multiple times in parallel. If a function app is using the Consumption hosting
plan, the function app could scale out automatically. Each instance of the function app, whether the app runs on
the Consumption hosting plan or a regular App Service hosting plan, might process concurrent function
invocations in parallel using multiple threads. The maximum number of concurrent function invocations in each
function app instance varies based on the type of trigger being used as well as the resources used by other
functions within the function app.
Repositories
The code for Azure Functions is open source and stored in GitHub repositories:
Azure Functions
Azure Functions host
Azure Functions portal
Azure Functions templates
Azure WebJobs SDK
Azure WebJobs SDK Extensions
Bindings
Here is a table of the bindings that are supported in the major versions of the Azure Functions runtime:

TYPE | 1.X | 2.X AND HIGHER1 | TRIGGER | INPUT | OUTPUT
Blob storage | ✔ | ✔ | ✔ | ✔ | ✔
Azure Cosmos DB | ✔ | ✔ | ✔ | ✔ | ✔
Azure SQL (preview) | | ✔ | | ✔ | ✔
Dapr3 | | ✔ | ✔ | ✔ | ✔
Event Grid | ✔ | ✔ | ✔ | | ✔
Event Hubs | ✔ | ✔ | ✔ | | ✔
HTTP & webhooks | ✔ | ✔ | ✔ | | ✔
IoT Hub | ✔ | ✔ | ✔ | |
Kafka2 | | ✔ | ✔ | | ✔
Mobile Apps | ✔ | | | ✔ | ✔
Notification Hubs | ✔ | | | | ✔
Queue storage | ✔ | ✔ | ✔ | | ✔
RabbitMQ2 | | ✔ | ✔ | | ✔
SendGrid | ✔ | ✔ | | | ✔
Service Bus | ✔ | ✔ | ✔ | | ✔
SignalR | | ✔ | ✔ | ✔ | ✔
Table storage | ✔ | ✔ | | ✔ | ✔
Timer | ✔ | ✔ | ✔ | |
Twilio | ✔ | ✔ | | | ✔

1 Starting with the version 2.x runtime, all bindings except HTTP and Timer must be registered. See Register
binding extensions.
2 Triggers aren't supported in the Consumption plan. Requires runtime-driven triggers.
3 Supported only in Kubernetes, IoT Edge, and other self-hosted modes.
Having issues with errors coming from the bindings? Review the Azure Functions Binding Error Codes
documentation.
Connections
Your function project references connection information by name from its configuration provider. It does not
directly accept the connection details, allowing them to be changed across environments. For example, a trigger
definition might include a connection property. This might refer to a connection string, but you cannot set the
connection string directly in a function.json . Instead, you would set connection to the name of an
environment variable that contains the connection string.
The default configuration provider uses environment variables. These might be set by Application Settings when
running in the Azure Functions service, or from the local settings file when developing locally.
Connection values
When the connection name resolves to a single exact value, the runtime identifies the value as a connection
string, which typically includes a secret. The details of a connection string are defined by the service to which
you wish to connect.
However, a connection name can also refer to a collection of multiple configuration items, useful for configuring
identity-based connections. Environment variables can be treated as a collection by using a shared prefix that
ends in double underscores __ . The group can then be referenced by setting the connection name to this prefix.
For example, the connection property for an Azure Blob trigger definition might be "Storage1". As long as there
is no single string value configured by an environment variable named "Storage1", an environment variable
named Storage1__blobServiceUri could be used to inform the blobServiceUri property of the connection. The
connection properties are different for each service. Refer to the documentation for the component that uses the
connection.
NOTE
When using Azure App Configuration or Key Vault to provide settings for Managed Identity connections, setting names
should use a valid key separator such as : or / in place of the __ to ensure names are resolved correctly.
For example, Storage1:blobServiceUri .
CONNECTION SOURCE | PLANS SUPPORTED | LEARN MORE
Azure Blob triggers and bindings | All | Extension version 5.0.0 or later
Azure Queue triggers and bindings | All | Extension version 5.0.0 or later
Azure Event Hubs triggers and bindings | All | Extension version 5.0.0 or later
Azure Service Bus triggers and bindings | All | Extension version 5.0.0 or later
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your blob container at runtime. Management
roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using
the Blob Storage extension in normal operation. Your application may require additional permissions based on
the code you write.

BINDING TYPE | EXAMPLE BUILT-IN ROLES
Trigger | Storage Blob Data Owner and Storage Queue Data Contributor1 (extra permissions must also be granted to the AzureWebJobsStorage connection2)
Input binding | Storage Blob Data Reader
Output binding | Storage Blob Data Owner
1 The blob trigger handles failure across multiple retries by writing poison blobs to a queue on the storage
account specified by the connection.
2 The AzureWebJobsStorage connection is used internally for blobs and queues that enable the trigger. If it is
configured to use an identity-based connection, it will need additional permissions beyond the default
requirement. These are covered by the Storage Blob Data Owner, Storage Queue Data Contributor, and Storage
Account Contributor roles. To learn more, see Connecting to host storage with an identity.
Common properties for identity-based connections
An identity-based connection for an Azure service accepts the following common properties, where
<CONNECTION_NAME_PREFIX> is the value of your connection property in the trigger or binding definition:
Additional options may be supported for a given connection type. Please refer to the documentation for the
component making the connection.
Local development with identity-based connections
NOTE
Local development with identity-based connections requires updated versions of the Azure Functions Core Tools. You can
check your currently installed version by running func -v . For Functions v3, use version 3.0.3904 or later. For
Functions v4, use version 4.0.3904 or later.
When running locally, the above configuration tells the runtime to use your local developer identity. The
connection will attempt to get a token from the following locations, in order:
A local cache shared between Microsoft applications
The current user context in Visual Studio
The current user context in Visual Studio Code
The current user context in the Azure CLI
If none of these options are successful, an error will occur.
Your identity may already have some role assignments against Azure resources used for development, but those
roles may not provide the necessary data access. Management roles like Owner are not sufficient. Double-check
what permissions are required for connections for each component, and make sure that you have them
assigned to yourself.
In some cases, you may wish to specify use of a different identity. You can add configuration properties for the
connection that point to the alternate identity based on a client ID and client secret for an Azure Active Directory
service principal. This configuration option is not supported when hosted in the Azure Functions
service. To use an ID and secret on your local machine, define the connection with the following additional
properties:

PROPERTY | ENVIRONMENT VARIABLE TEMPLATE | DESCRIPTION
Tenant ID | <CONNECTION_NAME_PREFIX>__tenantId | The Azure Active Directory tenant (directory) ID.
Client ID | <CONNECTION_NAME_PREFIX>__clientId | The client (application) ID of an app registration in the tenant.
Client secret | <CONNECTION_NAME_PREFIX>__clientSecret | A client secret that was generated for the app registration.
Here is an example of local.settings.json properties required for identity-based connection to Azure Blobs:
{
"IsEncrypted": false,
"Values": {
"<CONNECTION_NAME_PREFIX>__blobServiceUri": "<blobServiceUri>",
"<CONNECTION_NAME_PREFIX>__queueServiceUri": "<queueServiceUri>",
"<CONNECTION_NAME_PREFIX>__tenantId": "<tenantId>",
"<CONNECTION_NAME_PREFIX>__clientId": "<clientId>",
"<CONNECTION_NAME_PREFIX>__clientSecret": "<clientSecret>"
}
}
Other components in Functions rely on "AzureWebJobsStorage" for default behaviors. You should not move it to
an identity-based connection if you are using older versions of extensions that do not support this type of
connection, including triggers and bindings for Azure Blobs, Event Hubs, and Durable Functions. Similarly,
AzureWebJobsStorage is used for deployment artifacts when using server-side build in Linux Consumption, and if
you enable this, you will need to deploy via an external deployment package.
In addition, some apps reuse "AzureWebJobsStorage" for other storage connections in their triggers, bindings,
and/or function code. Make sure that all uses of "AzureWebJobsStorage" are able to use the identity-based
connection format before changing this connection from a connection string.
To use an identity-based connection for "AzureWebJobsStorage", configure the following app settings:

AzureWebJobsStorage__blobServiceUri: The data plane URI of the blob service of the storage account, using the HTTPS scheme.
AzureWebJobsStorage__queueServiceUri: The data plane URI of the queue service of the storage account, using the HTTPS scheme.
AzureWebJobsStorage__tableServiceUri: The data plane URI of the table service of the storage account, using the HTTPS scheme.
You will need to create a role assignment that provides access to the storage account for
"AzureWebJobsStorage" at runtime. Management roles like Owner are not sufficient. The Storage Blob Data
Owner role covers the basic needs of Functions host storage - the runtime needs both read and write access to
blobs and the ability to create containers. Several extensions use this connection as a default location for blobs,
queues, and tables, and these uses may add requirements, as noted in the table below. You may need
additional permissions if you use "AzureWebJobsStorage" for any other purposes.
EXTENSION | ROLES REQUIRED | EXPLANATION
No extension (host only) | Storage Blob Data Owner | Used for general coordination; default key store.
Azure Blobs (trigger only) | All of: Storage Account Contributor, Storage Blob Data Owner, Storage Queue Data Contributor | The blob trigger internally uses Azure Queues and writes blob receipts. It uses AzureWebJobsStorage for these, regardless of the connection configured for the trigger.
Azure Event Hubs (trigger only) | (No change from default requirement) Storage Blob Data Owner | Checkpoints are persisted in blobs using the AzureWebJobsStorage connection.
Timer trigger | (No change from default requirement) Storage Blob Data Owner | To ensure one execution per event, locks are taken with blobs using the AzureWebJobsStorage connection.
Reporting Issues
ITEM | DESCRIPTION | LINK
Runtime | Script host, triggers and bindings, language support | File an issue
Templates | Code issues with creation template | File an issue
Portal | User interface or experience issue | File an issue
Next steps
For more information, see the following resources:
Azure Functions triggers and bindings
Code and test Azure Functions locally
Best Practices for Azure Functions
Azure Functions C# developer reference
Azure Functions Node.js developer reference
Code and test Azure Functions locally
While you're able to develop and test Azure Functions in the Azure portal, many developers prefer a local
development experience. Functions makes it easy to use your favorite code editor and development tools to
create and test functions on your local computer. Your local functions can connect to live Azure services, and
you can debug them on your local computer using the full Functions runtime.
This article provides links to specific development environments for your preferred language. It also provides
some shared guidance for local development, such as working with the local.settings.json file.
ENVIRONMENT | LANGUAGES | DESCRIPTION
Visual Studio Code | C# (class library), C# isolated process (.NET 5.0), JavaScript, PowerShell, Python | The Azure Functions extension for VS Code adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see Create your first function using Visual Studio Code.
Command prompt or terminal | C# (class library), C# isolated process (.NET 5.0), JavaScript, PowerShell, Python | Azure Functions Core Tools provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime.
Visual Studio 2019 | C# (class library), C# isolated process (.NET 5.0) | The Azure Functions tools are included in the Azure development workload of Visual Studio 2019 and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see Develop Azure Functions using Visual Studio.
Each of these local development environments lets you create function app projects and use predefined function
templates to create new functions. Each uses the Core Tools so that you can test and debug your functions
against the real Functions runtime on your own machine just as you would any other app. You can also publish
your function app project from any of these environments to Azure.
IMPORTANT
Because the local.settings.json may contain secrets, such as connection strings, you should never store it in a remote
repository. Tools that support Functions provide ways to synchronize settings in the local.settings.json file with the app
settings in the function app to which your project is deployed.
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "<language worker>",
"AzureWebJobsStorage": "<connection-string>",
"MyBindingConnection": "<binding-connection-string>",
"AzureWebJobs.HttpExample.Disabled": "true"
},
"Host": {
"LocalHttpPort": 7071,
"CORS": "*",
"CORSCredentials": false
},
"ConnectionStrings": {
"SQLConnectionString": "<sqlclient-connection-string>"
}
}
IsEncrypted: When this setting is set to true, all values are encrypted with a local machine key. Used with func settings commands. Default value is false. You might want to encrypt the local.settings.json file on your local computer when it contains secrets, such as service connection strings. The host automatically decrypts settings when it runs. Use the func settings decrypt command before trying to read locally encrypted settings.
LocalHttpPort: Sets the default port used when running the local Functions host (func host start and func run). The --port command-line option takes precedence over this setting. For example, when running in Visual Studio IDE, you may change the port number by navigating to the "Project Properties -> Debug" window and explicitly specifying the port number in a host start --port <your-port-number> command that can be supplied in the "Application Arguments" field.
The following application settings can be included in the Values array when running locally:
AzureWebJobsStorage (storage account connection string, or UseDevelopmentStorage=true): Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the AzureWebJobsStorage reference. When you have the Azurite Emulator installed locally and you set AzureWebJobsStorage to UseDevelopmentStorage=true, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.
Next steps
To learn more about local development of compiled C# functions using Visual Studio 2019, see Develop
Azure Functions using Visual Studio.
To learn more about local development of functions using VS Code on a Mac, Linux, or Windows computer,
see the Visual Studio Code getting started article for your preferred language:
C# class library
C# isolated process (.NET 5.0)
Java
JavaScript
PowerShell
Python
TypeScript
To learn more about developing functions from the command prompt or terminal, see Work with Azure
Functions Core Tools.
Develop Azure Functions by using Visual Studio Code
The Azure Functions extension for Visual Studio Code lets you locally develop functions and deploy them to
Azure. If this experience is your first with Azure Functions, you can learn more at An introduction to Azure
Functions.
The Azure Functions extension provides these benefits:
Edit, build, and run functions on your local development computer.
Publish your Azure Functions project directly to Azure.
Write your functions in various languages while taking advantage of the benefits of Visual Studio Code.
The extension can be used with the following languages, which are supported by the Azure Functions runtime
starting with version 2.x:
C# compiled
C# script*
JavaScript
Java
PowerShell
Python
TypeScript
*Requires that you set C# script as your default project language.
In this article, examples are currently available only for JavaScript (Node.js) and C# class library functions.
This article provides details about how to use the Azure Functions extension to develop functions and publish
them to Azure. Before you read this article, you should create your first function by using Visual Studio Code.
IMPORTANT
Don't mix local development and portal development for a single function app. When you publish from a local project to a
function app, the deployment process overwrites any functions that you developed in the portal.
Prerequisites
Visual Studio Code installed on one of the supported platforms.
Azure Functions extension. You can also install the Azure Tools extension pack, which is recommended for
working with Azure resources.
An active Azure subscription. If you don't yet have an account, you can create one from the extension in
Visual Studio Code.
Run local requirements
These prerequisites are only required to run and debug your functions locally. They aren't required to create or
publish projects to Azure Functions.
C#
Java
JavaScript
PowerShell
Python
The Azure Functions Core Tools version 2.x or later. The Core Tools package is downloaded and installed
automatically when you start the project locally. Core Tools includes the entire Azure Functions runtime,
so download and installation might take some time.
The C# extension for Visual Studio Code.
.NET Core CLI tools.
2. Choose the directory location for your project workspace and choose Select . You should either create a
new folder or choose an empty folder for the project workspace. Don't choose a project folder that is
already part of a workspace.
3. When prompted, Select a language for your project, and if necessary choose a specific language
version.
4. Select the HTTP trigger function template, or you can select Skip for now to create a project without a
function. You can always add a function to your project later.
5. Type HttpExample for the function name and select Enter, and then select Function authorization. This
authorization level requires you to provide a function key when you call the function endpoint.
7. In Do you trust the authors of the files in this folder? window, select Yes .
8. A function is created in your chosen language and in the template for an HTTP-triggered function.
Generated project files
The project template creates a project in your chosen language and installs required dependencies. For any
language, the new project has these files:
host.json : Lets you configure the Functions host. These settings apply when you're running functions
locally and when you're running them in Azure. For more information, see host.json reference.
local.settings.json : Maintains settings used when you're running functions locally. These settings are
used only when you're running functions locally. For more information, see Local settings file.
IMPORTANT
Because the local.settings.json file can contain secrets, you need to exclude it from your project source control.
C#
Java
JavaScript
PowerShell
Python
Run the dotnet add package command in the Terminal window to install the extension packages that you need
in your project. The following example demonstrates how you add a binding for an in-process class library:
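dotnet add package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>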
The following example demonstrates how you add a binding for an isolated-process class library:
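dotnet add package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>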
In either case, replace <BINDING_TYPE_NAME> with the name of the package that contains the binding you need.
You can find the desired binding reference article in the list of supported bindings.
Replace <TARGET_VERSION> in the example with a specific version of the package, such as 3.0.0-beta5 . Valid
versions are listed on the individual package pages at NuGet.org. The major versions that correspond to
Functions runtime 1.x or 2.x are specified in the reference article for the binding.
C#
Java
JavaScript
PowerShell
Python
Connect to services
You can connect your function to other Azure services by adding input and output bindings. Bindings connect
your function to other services without you having to write the connection code. The process for adding
bindings depends on your project's language. To learn more about bindings, see Azure Functions triggers and
bindings concepts.
The following examples connect to a storage queue named outqueue , where the connection string for the
storage account is set in the MyStorageConnection application setting in local.settings.json.
C#
Java
JavaScript
PowerShell
Python
Update the function method to add the following parameter to the Run method definition:
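[Queue("outqueue"), StorageAccount("MyStorageConnection")] ICollector<string> msg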
The msg parameter is an ICollector<T> type, which represents a collection of messages that are written to an
output binding when the function completes. The following code adds a message to the collection:
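// "name" here is the value read from the HTTP request in the quickstart sample.
msg.Add(string.Format("Name passed to the function: {0}", name));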
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet
have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure
for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
PROMPT | SELECTION
Select subscription | Choose the subscription to use. You won't see this prompt when you have only one subscription visible under Resources.
Enter a globally unique name for the function app | Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
Select a runtime stack | Choose the language version on which you've been running locally.
Select a location for new resources | For better performance, choose a region near you.
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
PROMPT | SELECTION
Enter a globally unique name for the new function app | Type a globally unique name that identifies your new function app and then select Enter. Valid characters for a function app name are a-z, 0-9, and -.
Select a runtime stack | Choose the language version on which you've been running locally.
Select an OS | Choose either Linux or Windows. Python apps must run on Linux.
Select a resource group for new resources | Choose Create new resource group and type a resource group name, like myResourceGroup, and then select Enter. You can also select an existing resource group.
Select a location for new resources | Select a location in a region near you or near other services that your functions access.
Select a storage account | Choose Create new storage account and at the prompt, type a globally unique name for the new storage account used by your function app and then select Enter. Storage account names must be between 3 and 24 characters long and can contain only numbers and lowercase letters. You can also select an existing account.
Select an Application Insights resource for your app | Choose Create new Application Insights resource and at the prompt, type a name for the instance used to store runtime data from your functions.
A notification appears after your function app is created and the deployment package is applied. Select
View Output in this notification to view the creation and deployment results, including the Azure
resources that you created.
Get the URL of an HTTP triggered function in Azure
To call an HTTP-triggered function from a client, you need the URL of the function when it's deployed to your
function app. This URL includes any required function keys. You can use the extension to get these URLs for your
deployed functions. If you just want to run the remote function in Azure, use the Execute function now
functionality of the extension.
1. Select F1 to open the command palette, and then search for and run the command Azure Functions:
Copy Function URL .
2. Follow the prompts to select your function app in Azure and then the specific HTTP trigger that you want
to invoke.
The function URL is copied to the clipboard, along with any required keys passed by the code query parameter.
Use an HTTP tool to submit POST requests, or a browser for GET requests to the remote function.
When the extension gets the URL of functions in Azure, it uses your Azure account to automatically retrieve the
keys it needs to start the function. Learn more about function access keys. Starting non-HTTP triggered
functions requires using the admin key.
IMPORTANT
Deploying to an existing function app always overwrites the contents of that app in Azure.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Run functions
The Azure Functions extension lets you run individual functions. You can run functions either in your project on
your local development computer or in your Azure subscription.
For HTTP trigger functions, the extension calls the HTTP endpoint. For other kinds of triggers, it calls
administrator APIs to start the function. The message body of the request sent to the function depends on the
type of trigger. When a trigger requires test data, you're prompted to enter data in a specific JSON format.
Run functions in Azure
To execute a function in Azure from Visual Studio Code:
1. In the command palette, enter Azure Functions: Execute function now and choose your Azure
subscription.
2. Choose your function app in Azure from the list. If you don't see your function app, make sure you're
signed in to the correct subscription.
3. Choose the function you want to run from the list and type the message body of the request in Enter
request body. Press Enter to send this request message to your function. The default text in Enter
request body should indicate the format of the body. If your function app has no functions, an error
notification is shown.
4. When the function executes in Azure and returns a response, a notification is raised in Visual Studio Code.
You can also run your function from the Azure: Functions area by right-clicking (Ctrl-clicking on Mac) the
function you want to run from your function app in your Azure subscription and choosing Execute Function
Now....
When you run your functions in Azure from Visual Studio Code, the extension uses your Azure account to
automatically retrieve the keys it needs to start the function. Learn more about function access keys. Starting
non-HTTP triggered functions requires using the admin key.
Run functions locally
The local runtime is the same runtime that hosts your function app in Azure. Local settings are read from the
local.settings.json file. To run your Functions project locally, you must meet more requirements.
Configure the project to run locally
The Functions runtime uses an Azure Storage account internally for all trigger types other than HTTP and
webhooks. So you need to set the Values.AzureWebJobsStorage key to a valid Azure Storage account
connection string.
This section uses the Azure Storage extension for Visual Studio Code with Azure Storage Explorer to connect to
and retrieve the storage connection string.
To set the storage account connection string:
1. In Visual Studio, open Cloud Explorer, expand Storage Account > Your Storage Account, and then
select Properties and copy the Primary Connection String value.
2. In your project, open the local.settings.json file and set the value of the AzureWebJobsStorage key to
the connection string you copied.
3. Repeat the previous step to add unique keys to the Values array for any other connections required by
your functions.
For more information, see Local settings file.
Debug functions locally
To debug your functions, select F5. If you haven't already downloaded Core Tools, you're prompted to do so.
When Core Tools is installed and running, output is shown in the Terminal. This step is the same as running the
func start Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
When the project is running, you can use the Execute Function Now... feature of the extension to trigger your
functions as you would when the project is deployed to Azure. With the project running in debug mode,
breakpoints are hit in Visual Studio Code as you would expect.
1. In the command palette, enter Azure Functions: Execute function now and choose Local project.
2. Choose the function you want to run in your project and type the message body of the request in Enter
request body. Press Enter to send this request message to your function. The default text in Enter
request body should indicate the format of the body. If your function app has no functions, an error
notification is shown.
3. When the function runs locally and after the response is received, a notification is raised in Visual Studio
Code. Information about the function execution is shown in Terminal panel.
Running functions locally doesn't require using keys.
Local settings
When running in a function app in Azure, settings required by your functions are stored securely in app settings.
During local development, these settings are instead added to the Values object in the local.settings.json file.
The local.settings.json file also stores settings used by local development tools.
Because the local.settings.json may contain secrets, such as connection strings, you should never store it in a
remote repository. To learn more about local settings, see Local settings file.
By default, these settings aren't migrated automatically when the project is published to Azure. After publishing
finishes, you're given the option of publishing settings from local.settings.json to your function app in Azure. To
learn more, see Publish application settings.
Values in ConnectionStrings are never published.
The function application settings values can also be read in your code as environment variables. For more
information, see the Environment variables sections of these language-specific reference articles:
C# precompiled
C# script (.csx)
Java
JavaScript
PowerShell
Python
You can also publish settings by using the Azure Functions: Upload Local Settings command in the
command palette. You can add individual settings to application settings in Azure by using the Azure
Functions: Add New Setting command.
TIP
Be sure to save your local.settings.json file before you publish it.
If the local file is encrypted, it's decrypted, published, and encrypted again. If there are settings that have
conflicting values in the two locations, you're prompted to choose how to proceed.
View existing app settings in the Azure: Functions area by expanding your subscription, your function app, and
Application Settings .
Download settings from Azure
If you've created application settings in Azure, you can download them into your local.settings.json file by using
the Azure Functions: Download Remote Settings command.
As with uploading, if the local file is encrypted, it's decrypted, updated, and encrypted again. If there are settings
that have conflicting values in the two locations, you're prompted to choose how to proceed.
Monitoring functions
When you run functions locally, log data is streamed to the Terminal console. You can also get log data when
your Functions project is running in a function app in Azure. You can connect to streaming logs in Azure to see
near-real-time log data. You should enable Application Insights for a more complete understanding of how your
function app is behaving.
Streaming logs
When you're developing an application, it's often useful to see logging information in near-real time. You can
view a stream of log files being generated by your functions. This output is an example of streaming logs for a
request to an HTTP-triggered function:
To learn more, see Streaming logs.
To turn on the streaming logs for your function app in Azure:
1. Select F1 to open the command palette, and then search for and run the command Azure Functions:
Start Streaming Logs.
2. Select your function app in Azure, and then select Yes to enable application logging for the function app.
3. Trigger your functions in Azure. Notice that log data is displayed in the Output window in Visual Studio
Code.
4. When you're done, remember to run the command Azure Functions: Stop Streaming Logs to disable
logging for the function app.
NOTE
Streaming logs support only a single instance of the Functions host. When your function is scaled to multiple instances,
data from other instances isn't shown in the log stream. Live Metrics Stream in Application Insights does support multiple
instances. While also in near-real time, streaming analytics is based on sampled data.
Application Insights
We recommend that you monitor the execution of your functions by integrating your function app with
Application Insights. When you create a function app in the Azure portal, this integration occurs by default.
When you create your function app during Visual Studio publishing, you need to integrate Application Insights
yourself. To learn how, see Enable Application Insights integration.
To learn more about monitoring using Application Insights, see Monitor Azure Functions.
C# script projects
By default, all C# projects are created as C# compiled class library projects. If you prefer to work with C# script
projects instead, you must select C# script as the default language in the Azure Functions extension settings:
1. Select File > Preferences > Settings .
2. Go to User Settings > Extensions > Azure Functions .
3. Select C#Script from Azure Function: Project Language .
After you complete these steps, calls made to the underlying Core Tools include the --csx option, which
generates and publishes C# script (.csx) project files. When you have this default language specified, all projects
that you create default to C# script projects. You're not prompted to choose a project language when a default is
set. To create projects in other languages, you must change this setting or remove it from the user settings.json
file. After you remove this setting, you're again prompted to choose your language when you create a project.
Add New Settings Creates a new application setting in Azure. To learn more,
see Publish application settings. You might also need to
download this setting to your local settings.
Configure Deployment Source Connects your function app in Azure to a local Git
repository. To learn more, see Continuous deployment for
Azure Functions.
Copy Function URL Gets the remote URL of an HTTP-triggered function that's
running in Azure. To learn more, see Get the URL of the
deployed function.
Create function app in Azure Creates a new function app in your subscription in Azure. To
learn more, see the section on how to publish to a new
function app in Azure.
Decrypt Settings Decrypts local settings that have been encrypted by Azure
Functions: Encrypt Settings.
Delete Function App Removes a function app from your subscription in Azure.
When there are no other apps in the App Service plan,
you're given the option to delete that too. Other resources,
like storage accounts and resource groups, aren't deleted. To
remove all resources, you should instead delete the resource
group. Your local project isn't affected.
Delete Proxy Removes an Azure Functions proxy from your function app
in Azure. To learn more about proxies, see Work with Azure
Functions Proxies.
Delete Setting Deletes a function app setting in Azure. This deletion doesn't
affect settings in your local.settings.json file.
Download Remote Settings Downloads settings from the chosen function app in Azure
into your local.settings.json file. If the local file is encrypted,
it's decrypted, updated, and encrypted again. If there are
settings that have conflicting values in the two locations,
you're prompted to choose how to proceed. Be sure to save
changes to your local.settings.json file before you run this
command.
Encrypt Settings Encrypts individual items in the Values array in the local
settings. In this file, IsEncrypted is also set to true ,
which specifies that the local runtime will decrypt settings
before using them. Encrypt local settings to reduce the risk
of leaking valuable information. In Azure, application settings
are always stored encrypted.
Execute Function Now Manually starts a function using admin APIs. This command
is used for testing, both locally during debugging and
against functions running in Azure. When a function in Azure
starts, the extension first automatically obtains an admin
key, which it uses to call the remote admin APIs that start
functions in Azure. The body of the message sent to the API
depends on the type of trigger. Timer triggers don't require
you to pass any data.
Initialize Project for Use with VS Code Adds the required Visual Studio Code project files to an
existing Functions project. Use this command to work with a
project that you created by using Core Tools.
Install or Update Azure Functions Core Tools Installs or updates Azure Functions Core Tools, which is used
to run functions locally.
Rename Settings Changes the key name of an existing function app setting in
Azure. This command doesn't affect settings in your
local.settings.json file. After you rename settings in Azure,
you should download those changes to the local project.
Start Streaming Logs Starts the streaming logs for the function app in Azure. Use
streaming logs during remote troubleshooting in Azure if
you need to see logging information in near-real time. To
learn more, see Streaming logs.
Stop Streaming Logs Stops the streaming logs for the function app in Azure.
Toggle as Slot Setting When enabled, ensures that an application setting persists
for a given deployment slot.
Uninstall Azure Functions Core Tools Removes Azure Functions Core Tools, which is required by
the extension.
Upload Local Settings Uploads settings from your local.settings.json file to the
chosen function app in Azure. If the local file is encrypted, it's
decrypted, uploaded, and encrypted again. If there are
settings that have conflicting values in the two locations,
you're prompted to choose how to proceed. Be sure to save
changes to your local.settings.json file before you run this
command.
View Commit in GitHub Shows you the latest commit in a specific deployment when
your function app is connected to a repository.
View Deployment Logs Shows you the logs for a specific deployment to the function
app in Azure.
Next steps
To learn more about Azure Functions Core Tools, see Work with Azure Functions Core Tools.
To learn more about developing functions as .NET class libraries, see Azure Functions C# developer reference.
This article also provides links to examples of how to use attributes to declare the various types of bindings
supported by Azure Functions.
Develop Azure Functions using Visual Studio
Visual Studio lets you develop, test, and deploy C# class library functions to Azure. If this is your first
experience with Azure Functions, see An introduction to Azure Functions.
Visual Studio provides the following benefits when you develop your functions:
Edit, build, and run functions on your local development computer.
Publish your Azure Functions project directly to Azure, and create Azure resources as needed.
Use C# attributes to declare function bindings directly in the C# code.
Develop and deploy pre-compiled C# functions. Pre-compiled functions provide better cold-start
performance than C# script-based functions.
Code your functions in C# while having all of the benefits of Visual Studio development.
This article provides details about how to use Visual Studio to develop C# class library functions and publish
them to Azure. Before you read this article, consider completing the Functions quickstart for Visual Studio.
Unless otherwise noted, procedures and examples shown are for Visual Studio 2022.
Prerequisites
Azure Functions Tools. To add Azure Function Tools, include the Azure development workload in your
Visual Studio installation. If you're using Visual Studio 2017, you may need to follow some extra
installation steps.
Other resources that you need, such as an Azure Storage account, are created in your subscription during
the publishing process.
If you don't have an Azure subscription, create an Azure free account before you begin.
Make sure you set the Authorization level to Anonymous . If you choose the default level of Function ,
you're required to present the function key in requests to access your function endpoint.
5. Select Create to create the function project and HTTP trigger function.
After you create an Azure Functions project, the project template creates a C# project, installs the
Microsoft.NET.Sdk.Functions NuGet package, and sets the target framework. The new project has the following
files:
host.json : Lets you configure the Functions host. These settings apply both when running locally and in
Azure. For more information, see host.json reference.
local.settings.json : Maintains settings used when running functions locally. These settings aren't used
when running in Azure. For more information, see Local settings file.
IMPORTANT
Because the local.settings.json file can contain secrets, you must exclude it from your project source control. Make
sure the Copy to Output Directory setting for this file is set to Copy if newer.
Local settings
When running in a function app in Azure, settings required by your functions are stored securely in app settings.
During local development, these settings are instead added to the Values object in the local.settings.json file.
The local.settings.json file also stores settings used by local development tools.
Because the local.settings.json may contain secrets, such as connection strings, you should never store it in a
remote repository. To learn more about local settings, see Local settings file.
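For example, a minimal local.settings.json sketch looks like the following; the exact entries depend on your project, and the UseDevelopmentStorage=true value shown here is the common shortcut for the local storage emulator:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}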
Visual Studio doesn't automatically upload the settings in local.settings.json when you publish the project. To
make sure that these settings also exist in your function app in Azure, upload them after you publish your
project. For more information, see Function app settings. The values in a ConnectionStrings collection are never
published.
Your code can also read the function app settings values as environment variables. For more information, see
Environment variables.
You'll then be prompted to choose between using one of the two Azure storage emulators or referencing a
provisioned Azure storage account.
This trigger example uses a connection string with a key named QueueStorage . This key, stored in the
local.settings.json file, either references the Azure storage emulators or an Azure storage account.
4. Examine the newly added class. You see a static Run() method that's attributed with the FunctionName
attribute. This attribute indicates that the method is the entry point for the function.
For example, the following C# class represents a basic Queue storage trigger function:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
namespace FunctionApp1
{
public static class Function1
{
[FunctionName("QueueTriggerCSharp")]
public static void Run([QueueTrigger("myqueue-items",
Connection = "QueueStorage")]string myQueueItem, ILogger log)
{
log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
}
}
}
A binding-specific attribute is applied to each binding parameter supplied to the entry point method. The
attribute takes the binding information as parameters. In the previous example, the first parameter has a
QueueTrigger attribute applied, indicating a Queue storage trigger function. The queue name and connection
string setting name are passed as parameters to the QueueTrigger attribute. For more information, see Azure
Queue storage bindings for Azure Functions.
Use the above procedure to add more functions to your function app project. Each function in the project can
have a different trigger, but a function must have exactly one trigger. For more information, see Azure Functions
triggers and bindings concepts.
Add bindings
As with triggers, input and output bindings are added to your function as binding attributes. Add bindings to a
function as follows:
1. Make sure you've configured the project for local development.
2. Add the appropriate NuGet extension package for the specific binding by finding the binding-specific
NuGet package requirements in the reference article for the binding. For example, find package
requirements for the Event Hubs trigger in the Event Hubs binding reference article.
3. Use the following command in the Package Manager Console to install a specific package:
In-process
Isolated process
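For the in-process model, the Package Manager Console command takes the following general shape. The Microsoft.Azure.WebJobs.Extensions prefix is the common naming convention for in-process binding extension packages; confirm the exact package name in the binding's reference article:

Install-Package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION>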
In this example, replace <BINDING_TYPE> with the name specific to the binding extension and
<TARGET_VERSION> with a specific version of the package, such as 3.0.0-beta5 . Valid versions are listed on
the individual package pages at NuGet.org. The major versions that correspond to Functions runtime 1.x
or 2.x are specified in the reference article for the binding.
4. If there are app settings that the binding needs, add them to the Values collection in the local setting file.
The function uses these values when it runs locally. When the function runs in the function app in Azure, it
uses the function app settings.
5. Add the appropriate binding attribute to the method signature. In the following example, a queue
message triggers the function, and the output binding creates a new queue message with the same text
in a different queue.
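For example, a sketch of such a function follows. The queue names and the CopyQueueMessage function name are illustrative; the QueueStorage connection setting is the one defined earlier in the local settings file:

[FunctionName("CopyQueueMessage")]
public static void Run(
    [QueueTrigger("myqueue-items-source", Connection = "QueueStorage")] string myQueueItem,
    [Queue("myqueue-items-destination", Connection = "QueueStorage")] out string myQueueItemCopy,
    ILogger log)
{
    // Write the text of the incoming message to the output queue unchanged.
    log.LogInformation($"CopyQueueMessage function processed: {myQueueItem}");
    myQueueItemCopy = myQueueItem;
}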
Publish to Azure
When you publish from Visual Studio, it uses one of two deployment methods:
Web Deploy: Packages and deploys Windows apps to any IIS server.
Zip Deploy with run-from-package enabled: Recommended for Azure Functions deployments.
Use the following steps to publish your project to a function app in Azure.
1. In Solution Explorer , right-click the project and select Publish . In Target , select Azure then Next .
2. Select Azure Function App (Windows) for the Specific target , which creates a function app that runs
on Windows, and then select Next .
4. Create a new instance using the values specified in the following table:
Resource group: Name of your resource group. The resource group in which you want to create your function app. Select an existing resource group from the drop-down list, or select New to create a new resource group.
7. Select Finish , and on the Publish page, select Publish to deploy the package containing your project files
to your new function app in Azure.
After the deployment completes, the root URL of the function app in Azure is shown in the Publish tab.
8. In the Publish tab, in the Hosting section, choose Open in Azure portal. This opens the new function
app Azure resource in the Azure portal.
Selecting this link displays the Application settings dialog for the function app, where you can add new
application settings or modify existing ones.
Local displays a setting value in the local.settings.json file, and Remote displays a current setting value in the
function app in Azure. Choose Add setting to create a new app setting. Use the Insert value from Local link
to copy a setting value to the Remote field. Pending changes are written to the local settings file and the
function app when you select OK .
NOTE
By default, the local.settings.json file is not checked into source control. This means that if you clone a local Functions
project from source control, the project doesn't have a local.settings.json file. In this case, you need to manually create the
local.settings.json file in the project root so that the Application settings dialog works as expected.
You can also manage application settings in one of these other ways:
Use the Azure portal.
Use the --publish-local-settings publish option in the Azure Functions Core Tools.
Use the Azure CLI.
Remote Debugging
To debug your function app remotely, you must publish a debug configuration of your project. You also need to
enable remote debugging in your function app in Azure.
This section assumes you've already published to your function app using a release configuration.
Remote debugging considerations
Remote debugging isn't recommended on a production service.
If you have Just My Code debugging enabled, disable it.
Avoid long stops at breakpoints when remote debugging. Azure treats a process that is stopped for longer
than a few minutes as an unresponsive process, and shuts it down.
While you're debugging, the server is sending data to Visual Studio, which could affect bandwidth charges.
For information about bandwidth rates, see Azure Pricing.
Remote debugging is automatically disabled in your function app after 48 hours. After 48 hours, you'll need
to reenable remote debugging.
Attach the debugger
The way you attach the debugger depends on your execution mode. When debugging an isolated process app,
you currently need to attach the remote debugger to a separate .NET process, and several other configuration
steps are required.
When you're done, you should disable remote debugging.
In-process
Isolated process
To attach a remote debugger to a function app running in-process with the Functions host:
From the Publish tab, select the ellipses (...) in the Hosting section, and then choose Attach debugger .
Visual Studio connects to your function app and enables remote debugging, if it's not already enabled. It also
locates and attaches the debugger to the host process for the app. At this point, you can debug your function
app as normal.
Disable remote debugging
After you're done remote debugging your code, you should disable remote debugging in the Azure portal.
Remote debugging is automatically disabled after 48 hours, in case you forget.
1. In the Publish tab in your project, select the ellipses (...) in the Hosting section, and choose Open in
Azure portal. This action opens the function app in the Azure portal to which your project is deployed.
2. In the function app, select Configuration under Settings, choose General settings, set Remote
Debugging to Off, and select Save, then Continue.
After the function app restarts, you can no longer remotely connect to your remote processes. You can use this
same tab in the Azure portal to enable remote debugging outside of Visual Studio.
Monitoring functions
The recommended way to monitor the execution of your functions is by integrating your function app with
Azure Application Insights. When you create a function app in the Azure portal, this integration is done for you
by default. However, when you create your function app during Visual Studio publishing, the integration in your
function app in Azure isn't done. To learn how to connect Application Insights to your function app, see Enable
Application Insights integration.
To learn more about monitoring using Application Insights, see Monitor Azure Functions.
Testing functions
This section describes how to create a C# function app project in Visual Studio and how to run and test it with xUnit.
Setup
To set up your environment, create a function and test the app. The following steps help you create the apps and
functions required to support the tests:
1. Create a new Functions app and name it Functions.
2. Create an HTTP function from the template and name it MyHttpTrigger.
3. Create a timer function from the template and name it MyTimerTrigger.
4. Create an xUnit Test app in the solution and name it Functions.Tests. Remove the default test files.
5. Use NuGet to add a reference from the test app to Microsoft.AspNetCore.Mvc.
6. Reference the Functions app from the Functions.Tests app.
Create test classes
Now that the projects are created, you can create the classes used to run the automated tests.
Each function takes an instance of ILogger to handle message logging. Some tests either don't log messages or
have no concern for how logging is implemented. Other tests need to evaluate messages logged to determine
whether a test is passing.
You'll create a new class named ListLogger , which holds an internal list of messages to evaluate during testing.
To implement the required ILogger interface, the class needs a scope. The following class mocks a scope for the
test cases to pass to the ListLogger class.
Create a new class in Functions.Tests project named NullScope.cs and enter the following code:
using System;

namespace Functions.Tests
{
    public class NullScope : IDisposable
    {
        public static NullScope Instance { get; } = new NullScope();

        private NullScope() { }

        public void Dispose() { }
    }
}
Next, create a new class in Functions.Tests project named ListLogger.cs and enter the following code:
using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;

namespace Functions.Tests
{
    public class ListLogger : ILogger
    {
        public IList<string> Logs;

        public IDisposable BeginScope<TState>(TState state) => NullScope.Instance;

        public bool IsEnabled(LogLevel logLevel) => false;

        public ListLogger()
        {
            this.Logs = new List<string>();
        }

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
            Exception exception, Func<TState, Exception, string> formatter)
        {
            // Format the message and capture it for later assertions.
            this.Logs.Add(formatter(state, exception));
        }
    }
}
The ListLogger class implements the following members as contracted by the ILogger interface:
BeginScope : Scopes add context to your logging. In this case, the test just points to the static instance on
the NullScope class to allow the test to function.
IsEnabled : A default value of false is provided.
Log : This method uses the provided formatter function to format the message and then adds the
resulting text to the Logs collection.
The Logs collection is an instance of List<string> and is initialized in the constructor.
Next, create a new file in Functions.Tests project named LoggerTypes.cs and enter the following code:
namespace Functions.Tests
{
public enum LoggerTypes
{
Null,
List
}
}
Next, create a new class in Functions.Tests project named TestFactory.cs and enter the following code. The Data() rows were shown in the extraction above; the CreateDictionary and CreateHttpRequest helpers used by the tests are completed here as a sketch, based on how the tests below call them:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Internal;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Primitives;
using System.Collections.Generic;

namespace Functions.Tests
{
    public class TestFactory
    {
        public static IEnumerable<object[]> Data()
        {
            return new List<object[]>
            {
                new object[] { "name", "Bill" },
                new object[] { "name", "Paul" },
                new object[] { "name", "Steve" }
            };
        }

        // Builds the query-string dictionary for a test request.
        private static Dictionary<string, StringValues> CreateDictionary(string key, string value)
        {
            return new Dictionary<string, StringValues> { { key, value } };
        }

        // Creates an HttpRequest carrying the given query-string pair for the HTTP trigger tests.
        public static HttpRequest CreateHttpRequest(string queryStringKey, string queryStringValue)
        {
            var context = new DefaultHttpContext();
            var request = context.Request;
            request.Query = new QueryCollection(CreateDictionary(queryStringKey, queryStringValue));
            return request;
        }

        // Returns a ListLogger or a null logger, depending on the requested type.
        public static ILogger CreateLogger(LoggerTypes type = LoggerTypes.Null)
        {
            ILogger logger;

            if (type == LoggerTypes.List)
            {
                logger = new ListLogger();
            }
            else
            {
                logger = NullLoggerFactory.Instance.CreateLogger("Null Logger");
            }

            return logger;
        }
    }
}
Finally, create a new class in Functions.Tests project named FunctionsTests.cs and enter the following code:
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using System.Threading.Tasks;
using Xunit;
namespace Functions.Tests
{
public class FunctionsTests
{
private readonly ILogger logger = TestFactory.CreateLogger();
[Fact]
public async Task Http_trigger_should_return_known_string()
{
var request = TestFactory.CreateHttpRequest("name", "Bill");
var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
Assert.Equal("Hello, Bill. This HTTP triggered function executed successfully.",
response.Value);
}
[Theory]
[MemberData(nameof(TestFactory.Data), MemberType = typeof(TestFactory))]
public async Task Http_trigger_should_return_known_string_from_member_data(string queryStringKey,
string queryStringValue)
{
var request = TestFactory.CreateHttpRequest(queryStringKey, queryStringValue);
var response = (OkObjectResult)await MyHttpTrigger.Run(request, logger);
Assert.Equal($"Hello, {queryStringValue}. This HTTP triggered function executed successfully.",
response.Value);
}
[Fact]
public void Timer_should_log_message()
{
var logger = (ListLogger)TestFactory.CreateLogger(LoggerTypes.List);
new MyTimerTrigger().Run(null, logger);
var msg = logger.Logs[0];
Assert.Contains("C# Timer trigger function executed at", msg);
}
}
}
Debug tests
To debug the tests, set a breakpoint on a test, navigate to the Test Explorer and select Run > Debug Last
Run .
2. After the tools update is downloaded, select Close , and then close Visual Studio to trigger the tools
update with VSIX Installer.
3. In VSIX Installer, choose Modify to update the tools.
4. After the update is complete, choose Close , and then restart Visual Studio.
Next steps
For more information about the Azure Functions Core Tools, see Work with Azure Functions Core Tools.
For more information about developing functions as .NET class libraries, see Azure Functions C# developer
reference. This article also links to examples on how to use attributes to declare the various types of bindings
supported by Azure Functions.
Work with Azure Functions Core Tools
Azure Functions Core Tools lets you develop and test your functions on your local computer from the command
prompt or terminal. Your local functions can connect to live Azure services, and you can debug your functions
on your local computer using the full Functions runtime. You can even deploy a function app to your Azure
subscription.
IMPORTANT
Do not mix local development with portal development in the same function app. When you create and publish functions
from a local project, you should not try to maintain or modify project code in the portal.
Developing functions on your local computer and publishing them to Azure using Core Tools follows these basic
steps:
Install the Core Tools and dependencies.
Create a function app project from a language-specific template.
Register trigger and binding extensions.
Define Storage and other connections.
Create a function from a trigger and language-specific template.
Run the function locally.
Publish the project to Azure.
Prerequisites
The specific prerequisites for Core Tools depend on the features you plan to use:
Publish : Core Tools currently depends on either the Azure CLI or Azure PowerShell for authenticating with your
Azure account. This means that you must install one of these tools to be able to publish to Azure from Azure
Functions Core Tools.
Install extensions : To manually install extensions by using Core Tools, you must have the .NET Core 3.1 SDK
installed. The .NET Core SDK is used by Core Tools to install extensions from NuGet. You don't need to know .NET
to use Azure Functions extensions.
Supports version 4.x of the Functions runtime. This version supports Windows, macOS, and Linux, and uses
platform-specific package managers or npm for installation. This is the recommended version of the Functions
runtime and Core Tools.
You can only install one version of Core Tools on a given computer. Unless otherwise noted, the examples in this
article are for version 3.x.
The following steps use a Windows installer (MSI) to install Core Tools v4.x. For more information about other
package-based installers, see the Core Tools readme.
Download and run the Core Tools installer, based on your version of Windows:
v4.x - Windows 64-bit (Recommended. Visual Studio Code debugging requires 64-bit.)
v4.x - Windows 32-bit
If you used the Windows installer (MSI) to install Core Tools on Windows, you should uninstall the old version
from Add or Remove Programs before installing a different version.
.vscode\extensions.json: Settings file used when opening the project folder in Visual Studio Code.
To learn more about the Functions project folder, see the Azure Functions developers guide.
In the terminal window or from a command prompt, run the following command to create the project and local
Git repository:
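A typical form of the command, assuming the MyFunctionProj folder name used in this example:

func init MyFunctionProj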
This example creates a Functions project in a new MyFunctionProj folder. You are prompted to choose a default
language for your project.
The following considerations apply to project initialization:
If you don't provide the --worker-runtime option in the command, you're prompted to choose your
language. For more information, see the func init reference.
When you don't provide a project name, the current folder is initialized.
If you plan to publish your project to a custom Linux container, use the --docker option to make sure
that a Dockerfile is generated for your project. To learn more, see Create a function on Linux using a
custom image.
Certain languages may have additional considerations:
C#
Java
JavaScript
PowerShell
Python
TypeScript
By default, version 2.x and later versions of the Core Tools create function app projects for the .NET
runtime as C# class projects (.csproj). Version 3.x also supports creating functions that run on .NET 5.0 in
an isolated process. These C# projects, which can be used with Visual Studio or Visual Studio Code, are
compiled during debugging and when publishing to Azure.
Use the --csx parameter if you want to work locally with C# script (.csx) files. These are the same files
you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn
more, see the func init reference.
Register extensions
Starting with runtime version 2.x, Functions triggers and bindings are implemented as .NET extension (NuGet)
packages. For compiled C# projects, you simply reference the NuGet extension packages for the specific triggers
and bindings you are using. HTTP bindings and timer triggers don't require extensions.
To improve the development experience for non-C# projects, Functions lets you reference a versioned extension
bundle in your host.json project file. Extension bundles make all extensions available to your app and remove
the chance of having package compatibility issues between extensions. Extension bundles also remove the
requirement of installing the .NET Core 3.1 SDK and having to deal with the extensions.csproj file.
Extension bundles are the recommended approach for functions projects other than C# compiled projects, as well
as C# script. For these projects, the extension bundle setting is generated in the host.json file during
initialization. If bundles aren't enabled, you need to update the project's host.json file.
The easiest way to install binding extensions is to enable extension bundles. When you enable bundles, a
predefined set of extension packages is automatically installed.
To enable extension bundles, open the host.json file and update its contents to match the following code:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
}
}
Local settings
When running in a function app in Azure, settings required by your functions are stored securely in app settings.
During local development, these settings are instead added to the Values object in the local.settings.json file.
The local.settings.json file also stores settings used by local development tools.
Because the local.settings.json may contain secrets, such as connection strings, you should never store it in a
remote repository. To learn more about local settings, see Local settings file.
By default, these settings are not migrated automatically when the project is published to Azure. Use the
--publish-local-settings option when you publish to make sure these settings are added to the function app in
Azure. Values in the ConnectionStrings section are never published.
The function app settings values can also be read in your code as environment variables. For more information,
see the Environment variables section of these language-specific reference topics:
C# precompiled
C# script (.csx)
Java
JavaScript
PowerShell
Python
When no valid storage connection string is set for AzureWebJobsStorage and the emulator isn't being used, the
following error message is shown:
Missing value for AzureWebJobsStorage in local.settings.json. This is required for all triggers other than
HTTP. You can run 'func azure functionapp fetch-app-settings <functionAppName>' or specify a connection
string in local.settings.json.
Portal
Core Tools
Storage Explorer
1. From the Azure portal, search for and select Storage accounts .
2. Select your storage account, select Access keys in Settings , then copy one of the Connection string
values.
Create a function
To create a function in an existing project, run the following command:
func new
In version 3.x/2.x, when you run func new you are prompted to choose a template in the default language of
your function app. Next, you're prompted to choose a name for your function. In version 1.x, you are also
required to choose the language.
You can also specify the function name and template in the func new command. The following example uses the
--template option to create an HTTP trigger named MyHttpTrigger :
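A typical invocation looks like the following; the exact template name string can vary slightly by language:

func new --name MyHttpTrigger --template "HTTP trigger"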
C#
Java
JavaScript
PowerShell
Python
TypeScript
func start
NOTE
Version 1.x of the Functions runtime instead requires func host start . To learn more, see Azure Functions Core Tools
reference.
When the Functions host starts, it outputs the URL of HTTP-triggered functions, like in the following example:
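For example, with the MyHttpTrigger function created earlier, the output includes a line similar to the following (the port and function name reflect your own project):

Http Functions:

        MyHttpTrigger: [GET,POST] http://localhost:7071/api/MyHttpTrigger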
IMPORTANT
When running locally, authorization isn't enforced for HTTP endpoints. This means that all local HTTP requests are handled
as authLevel = "anonymous" . For more information, see the HTTP binding article.
NOTE
Examples in this topic use the cURL tool to send HTTP requests from the terminal or a command prompt. You can use a
tool of your choice to send HTTP requests to the local server. The cURL tool is available by default on Linux-based systems
and Windows 10 build 17063 and later. On older Windows, you must first download and install the cURL tool.
For more general information on testing functions, see Strategies for testing your code in Azure Functions.
HTTP and webhook triggered functions
You call the following endpoint to locally run HTTP and webhook triggered functions:
http://localhost:{port}/api/{function_name}
Make sure to use the same server name and port that the Functions host is listening on. You see this in the
output generated when starting the Function host. You can call this URL using any HTTP method supported by
the trigger.
The following cURL command triggers the MyHttpTrigger quickstart function from a GET request with the name
parameter passed in the query string.
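A representative form of that command, assuming the default local port of 7071 and an arbitrary name value:

curl --get "http://localhost:7071/api/MyHttpTrigger?name=Azure%20Rocks"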
The following example is the same function called from a POST request passing name in the request body:
Bash
Cmd
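A Bash sketch of the POST request; the JSON property name matches what the HTTP trigger template reads from the request body:

curl --request POST http://localhost:7071/api/MyHttpTrigger --data '{"name":"Azure Rocks"}'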
You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you
must use cURL, Fiddler, Postman, or a similar HTTP testing tool that supports POST requests.
Non-HTTP triggered functions
For all functions other than HTTP and Event Grid triggers, you can test your functions locally using REST by
calling a special endpoint called an administration endpoint. Calling this endpoint with an HTTP POST request on
the local server triggers the function.
To test Event Grid triggered functions locally, see Local testing with viewer web app.
You can optionally pass test data to the execution in the body of the POST request. This functionality is similar to
the Test tab in the Azure portal.
You call the following administrator endpoint to trigger non-HTTP functions:
http://localhost:{port}/admin/functions/{function_name}
To pass test data to the administrator endpoint of a function, you must supply the data in the body of a POST
request message. The message body is required to have the following JSON format:
{
"input": "<trigger_input>"
}
The <trigger_input> value contains data in a format expected by the function. The following cURL example is a
POST to a QueueTriggerJS function. In this case, the input is a string that is equivalent to the message expected
to be found in the queue.
Bash
Cmd
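A Bash sketch of such a call; the input string is arbitrary sample queue data:

curl --request POST -H "Content-Type:application/json" --data '{"input":"sample queue data"}' http://localhost:7071/admin/functions/QueueTriggerJS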
When you call an administrator endpoint on your function app in Azure, you must provide an access key. To
learn more, see Function access keys.
Publish to Azure
The Azure Functions Core Tools supports two types of deployment:
Project files ( func azure functionapp publish ): Deploys function project files directly to your function app using zip deployment.
Kubernetes cluster ( func kubernetes deploy ): Deploys your Linux function app as a custom Docker container to a Kubernetes cluster.
IMPORTANT
You must have the Azure CLI or Azure PowerShell installed locally to be able to publish to Azure from Core Tools.
A project folder may contain language-specific files and directories that shouldn't be published. Excluded items
are listed in a .funcignore file in the root project folder.
You must have already created a function app in your Azure subscription, to which you'll deploy your code.
Projects that require compilation should be built so that the binaries can be deployed.
To learn how to create a function app from the command prompt or terminal window using the Azure CLI or
Azure PowerShell, see Create a Function App for serverless execution.
IMPORTANT
When you create a function app in the Azure portal, it uses version 3.x of the Function runtime by default. To make the
function app use version 1.x of the runtime, follow the instructions in Run on version 1.x. You can't change the runtime
version for a function app that has existing functions.
Install extensions
If you aren't able to use extension bundles, you can use Azure Functions Core Tools locally to install the specific
extension packages required by your project.
IMPORTANT
You can't explicitly install extensions in a function app with extension bundles enabled. First, remove the
extensionBundle section in host.json before explicitly installing extensions.
The following items describe some reasons you might need to install extensions manually:
You need to access a specific version of an extension not available in a bundle.
You need to access a custom extension not available in a bundle.
You need to access a specific combination of extensions not available in a single bundle.
When you explicitly install extensions, a .NET project file named extensions.csproj is added to the root of your
project. This file defines the set of NuGet packages required by your functions. While you can work with the
NuGet package references in this file, Core Tools lets you install extensions without having to manually edit this
C# project file.
There are several ways to use Core Tools to install the required extensions in your local project.
Install all extensions
Use the following command to automatically add all extension packages used by the bindings in your local
project:
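That command is func extensions install, run from the root folder of your project:

func extensions install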
The command reads the function.json file to see which packages you need, installs them, and rebuilds the
extensions project (extensions.csproj). It adds any new bindings at the current version but doesn't update
existing bindings. Use the --force option to update existing bindings to the latest version when installing new
ones. To learn more, see the func extensions install command.
If your function app uses bindings or NuGet packages that Core Tools does not recognize, you must manually
install the specific extension.
Install a specific extension
Use the following command to install a specific extension package at a specific version, in this case the Storage
extension:
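A sketch of the command; the version number shown is illustrative, so check NuGet.org for the version you need:

func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 4.0.4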
You can use this command to install any compatible NuGet package. To learn more, see the
func extensions install command.
Monitoring functions
The recommended way to monitor the execution of your functions is by integrating with Azure Application
Insights. You can also stream execution logs to your local computer. To learn more, see Monitor Azure Functions.
Application Insights integration
Application Insights integration should be enabled when you create your function app in Azure. If for some
reason your function app isn't connected to an Application Insights instance, it's easy to do this integration in the
Azure portal. To learn more, see Enable Application Insights integration.
Enable streaming logs
You can view a stream of log files being generated by your functions in a command-line session on your local
computer.
Built-in log streaming
Use the func azure functionapp logstream command to start receiving streaming logs of a specific function app
running in Azure, as in the following example:
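Replace <FunctionAppName> with the name of your function app in Azure:

func azure functionapp logstream <FunctionAppName>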
NOTE
Built-in log streaming isn't yet enabled in Core Tools for function apps running on Linux in a Consumption plan. For these
hosting plans, you instead need to use Live Metrics Stream to view the logs in near-real time.
This type of streaming logs requires that Application Insights integration be enabled for your function app.
Next steps
Learn how to develop, test, and publish Azure Functions by using Azure Functions Core Tools in the Microsoft Learn module.
Azure Functions Core Tools is open source and hosted on GitHub.
To file a bug or feature request, open a GitHub issue.
Azure Function Event Grid Blob Trigger
This article demonstrates how to debug and deploy a local Event Grid Blob triggered function that handles
events raised by a storage account.
NOTE
The Event Grid Blob trigger is in preview.
Prerequisites
Create or use an existing function app
Create or use an existing storage account
Have version 5.0+ of the Microsoft.Azure.WebJobs.Extensions.Storage extension installed
Download ngrok to allow Azure to call your local function
http://localhost:7071/runtime/webhooks/blobs?functionName={functionname}
Note your function app's name and that the trigger type is a blob trigger, which is indicated by blobs in
the URL. This will be needed when setting up endpoints later in this how-to guide.
4. Once the function is created, add the Event Grid source parameter.
C#
Python
Java
As the utility is set up, the command window should look similar to the following screenshot:
Copy the HTTPS URL generated when ngrok is run. This value is used when configuring the event grid event
endpoint.
4. Once the endpoint type is configured, click on Select an endpoint to configure the endpoint value.
The Subscriber Endpoint value is made up from three different values. The prefix is the HTTPS URL
generated by ngrok. The remainder of the URL comes from the localhost URL copied earlier in this how-to
guide, with the function name added at the end. Starting with the localhost URL, the ngrok URL replaces
http://localhost:7071 and the function name replaces {functionname} .
5. The following screenshot shows an example of how the final URL should look when using an Event Grid
trigger type.
Upload a file
Now you can upload a file to your storage account to trigger an Event Grid event for your local function to
handle.
Open Storage Explorer and connect it to your storage account.
Expand Blob Containers
Right-click and select Create Blob Container .
Name the container samples-workitems
Select the samples-workitems container
Click the Upload button
Click Upload Files
Select a file and upload it to the blob container
Deployment
As you deploy the function app to Azure, update the webhook endpoint from your local endpoint to your
deployed app endpoint. To update an endpoint, follow the steps in Add a storage event and use the below for
the webhook URL in step 5. The <BLOB-EXTENSION-KEY> can be found in the App Keys section from the left menu
of your Function App .
C#
Python
Java
https://<FUNCTION-APP-NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=<FUNCTION-NAME>&code=<BLOB-EXTENSION-KEY>
Clean up resources
To clean up the resources created in this article, delete the event grid subscription you created in this tutorial.
Next steps
Automate resizing uploaded images using Event Grid
Event Grid trigger for Azure Functions
Create your first function in the Azure portal
Azure Functions lets you run your code in a serverless environment without having to first create a virtual
machine (VM) or publish a web application. In this article, you learn how to use Azure Functions to create a
"hello world" HTTP trigger function in the Azure portal.
NOTE
In-portal editing is only supported for JavaScript, PowerShell, TypeScript, and C# Script functions.
For C# class library, Java, and Python functions, you can create the function app in the portal, but you must also create
the functions locally and then publish them to Azure.
We recommend that you develop your functions locally and publish to a function app in Azure.
Use one of the following links to get started with your chosen local development environment and language:
C# script .NET ✓ ✓ ✓
JavaScript Node.js ✓ ✓ ✓
Python Python ✓ ✓
Java Java ✓ ✓
TypeScript Node.js ✓ ✓
1 In the portal, you can't currently create function apps that run on .NET 5.0. For more information on .NET 5
functions, see Develop and publish .NET 5 functions using Azure Functions.
For more information on operating system and language support, see Operating system/runtime support.
When in-portal editing isn't available, you must instead develop your functions locally.
Prerequisites
If you don't have an Azure subscription, create an Azure free account before you begin.
Sign in to Azure
Sign in to the Azure portal with your Azure account.
Function App name: Globally unique name. A name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
5. Select Next : Monitoring . On the Monitoring page, enter the following settings:
3. Under Template details use HttpExample for New Function , select Anonymous from the
Authorization level drop-down list, and then select Create .
Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request.
3. Paste the function URL into your browser's address bar. Add the query string value ?name=<your_name> to
the end of this URL and press Enter to run the request. The browser should display a response message
that echoes back your query string value.
If the request URL included an access key ( ?code=... ), it means you selected Function instead of
Anonymous access level when creating the function. In this case, you must instead append
&name=<your_name> .
4. When your function runs, trace information is written to the logs. To see the trace output, return to the
Code + Test page in the portal and expand the Logs arrow at the bottom of the page. Call your function
again to see the trace output written to the logs.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
Now that you've created your first function, let's add an output binding to the function that writes a message to
a Storage queue.
Add messages to an Azure Storage queue using Functions
Quickstart: Create a C# function in Azure from the
command line
In this article, you use command-line tools to create a C# function that responds to HTTP requests. After testing
the code locally, you deploy it to the serverless environment of Azure Functions.
This article supports creating both types of compiled C# functions:
Isolated process: Your function code runs in a separate .NET worker process.
Check out .NET supported versions before getting started.
To learn more, see Develop isolated process functions in C#.
This article creates an HTTP triggered function that runs on .NET 6.0. There is also a Visual Studio Code-based
version of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
In a terminal or command window, run func --version to check that the Azure Functions Core Tools are
version 4.x.
Run dotnet --list-sdks to check that the required versions are installed.
Run az --version to check that the Azure CLI version is 2.4 or later.
Run az login to sign in to Azure and verify an active subscription.
cd LocalFunctionProj
This folder contains various files for the project, including configuration files named local.settings.json
and host.json. Because local.settings.json can contain secrets downloaded from Azure, the file is excluded
from source control by default in the .gitignore file.
3. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
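A typical form of the command, using the name and template described above:

func new --name HttpExample --template "HTTP trigger"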
HttpExample.cs contains a Run method that receives request data in the req variable, an HttpRequest that's
decorated with the HttpTriggerAttribute , which defines the trigger behavior.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
namespace LocalFunctionProj
{
public static class HttpExample
{
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            string name = req.Query["name"];

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            return name != null
                ? (ActionResult)new OkObjectResult($"Hello, {name}")
                : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
        }
    }
}
The return object is an ActionResult that returns a response message as either an OkObjectResult (200) or a
BadRequestObjectResult (400).
To learn more, see Azure Functions HTTP triggers and bindings.
func start
Toward the end of the output, the following lines should appear:
...

Http Functions:

        HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
2. Copy the URL of your HttpExample function from this output to a browser:
In-process
Isolated process
To the function URL, append the query string ?name=<YOUR_NAME> , making the full URL like
http://localhost:7071/api/HttpExample?name=Functions . The browser should display a response message
that echoes back your query string value. The terminal in which you started your project also shows log
output as you make requests.
3. When you're done, use Ctrl+C and choose y to stop the functions host.
Azure CLI
Azure PowerShell
az login
Azure CLI
Azure PowerShell
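A sketch of the Azure CLI form; the AzureFunctionsQuickstart-rg name matches the resource group referenced in the storage step below:

az group create --name AzureFunctionsQuickstart-rg --location <REGION>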
The az group create command creates a resource group. In the above command, replace <REGION> with a
region near you, using an available region code returned from the az account list-locations command.
3. Create a general-purpose storage account in your resource group and region:
Azure CLI
Azure PowerShell
az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS
Azure CLI
Azure PowerShell
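A sketch of the Azure CLI form, inferred from the placeholders and runtime version described in the surrounding text:

az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>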
NOTE
This command creates a function app using the 3.x version of the Azure Functions runtime. The
func azure functionapp publish command that you'll run later updates the app to version 4.x.
In the previous example, replace <STORAGE_NAME> with the name of the account you used in the previous
step, and replace <APP_NAME> with a globally unique name appropriate to you. The <APP_NAME> is also the
default DNS domain for the function app.
This command creates a function app running in your specified language runtime under the Azure
Functions Consumption Plan, which is free for the amount of usage you incur here. The command also
provisions an associated Azure Application Insights instance in the same resource group, with which you
can monitor your function app and view logs. For more information, see Monitor Azure Functions. The
instance incurs no costs until you activate it.
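With the Azure resources in place, deploy your local project using the Core Tools publish command, replacing <APP_NAME> with the name of your app:

func azure functionapp publish <APP_NAME>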
The publish command shows results similar to the following output (truncated for simplicity):
...
...
Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in msdocs-azurefunctions-qs:
HttpExample - [httpTrigger]
Invoke url: https://msdocs-azurefunctions-qs.azurewebsites.net/api/httpexample
In-process
Isolated process
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . When you navigate to this URL, the browser should display
similar output as when you ran the function locally.
Run the following command to view near real-time streaming logs:
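A sketch of the command, using the app name from the publish step:

func azure functionapp logstream <APP_NAME>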
In a separate terminal window or in the browser, call the remote function again. A verbose log of the function
execution in Azure is shown in the terminal.
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
Azure CLI
Azure PowerShell
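The Azure CLI form of the command:

az group delete --name AzureFunctionsQuickstart-rg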
Next steps
In-process
Isolated process
Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this
project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using
Visual Studio Code, you should instead consider the Visual Studio Code-based version of this article.
By default, this article shows you how to create C# functions that run on .NET 6 in the same process as the
Functions host. These in-process C# functions are only supported on Long Term Support (LTS) versions of .NET,
such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in
preview) in an isolated process, see the alternate version of this article.
In this article, you learn how to:
Use Visual Studio to create a C# class library project on .NET 6.0.
Create a function that responds to HTTP requests.
Run your code locally to verify function behavior.
Deploy your code project to Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Prerequisites
Visual Studio 2022, which supports .NET 6.0. Make sure to select the Azure development workload
during installation.
Azure subscription. If you don't already have an account, create a free one before you begin.
Make sure you set the Authorization level to Anonymous . If you choose the default level of Function ,
you're required to present the function key in requests to access your function endpoint.
5. Select Create to create the function project and HTTP trigger function.
Visual Studio creates a project and class that contains boilerplate code for the HTTP trigger function type. The
boilerplate code sends an HTTP response that includes a value from the request body or query string. The
HttpTrigger attribute specifies that the function is triggered by an HTTP request.
Your function definition should now look like the following code:
.NET 6
.NET 6 Isolated
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
ILogger log)
Now that you've renamed the function, you can test it on your local computer.
2. Select Azure Function App (Windows) for the Specific target , which creates a function app that runs
on Windows, and then select Next .
3. In the Function Instance , choose Create a new Azure Function...
4. Create a new instance using the values specified in the following table:
Resource group: Name of your resource group. The resource group in which you want to create your function app. Select an existing resource group from the drop-down list, or select New to create a new resource group.
7. Select Finish , and on the Publish page, select Publish to deploy the package containing your project files
to your new function app in Azure.
After the deployment completes, the root URL of the function app in Azure is shown in the Publish tab.
8. In the Publish tab, in the Hosting section, choose Open in Azure portal. This opens the new function
app Azure resource in the Azure portal.
4. Go to this URL and you see a response in the browser to the remote GET request returned by the
function, which looks like the following example:
Clean up resources
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You created Azure resources to complete this quickstart. You may be billed for these resources, depending on
your account status and service pricing. Other quickstarts in this collection build upon this quickstart. If you plan
to work with subsequent quickstarts, tutorials, or with any of the services you have created in this quickstart,
don't clean up the resources.
Use the following steps to delete the function app and its related resources to avoid incurring any further costs.
1. In the Visual Studio Publish dialog, in the Hosting section, select Open in Azure portal.
2. In the function app page, select the Overview tab and then select the link under Resource group.
3. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
4. Select Delete resource group , and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
In this quickstart, you used Visual Studio to create and publish a C# function app in Azure with a simple HTTP
trigger function.
.NET 6
.NET 6 Isolated
To learn more about working with C# functions that run in-process with the Functions host, see Develop C#
class library functions using Azure Functions.
Advance to the next article to learn how to add an Azure Storage queue binding to your function:
Add an Azure Storage queue binding to your function
Quickstart: Create a C# function in Azure using
Visual Studio Code
In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing
the code locally, you deploy it to the serverless environment of Azure Functions. This article creates an HTTP
triggered function that runs on .NET 6.0. There's also a CLI-based version of this article.
By default, this article shows you how to create C# functions that run on .NET 6 in the same process as the
Functions host. These in-process C# functions are only supported on Long Term Support (LTS) versions of .NET,
such as .NET 6. To create C# functions on .NET 6 that can also run on .NET 5.0 and .NET Framework 4.8 (in
preview) in an isolated process, see the alternate version of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Prompt and selection:
Select a template for your project's first function Choose HTTP trigger .
Select how you would like to open your project Select Add to workspace .
NOTE
If you don't see .NET 6 as a runtime option, check the following:
Make sure you have installed the .NET 6.0 SDK.
Press F1 and type Preferences: Open user settings , then search for
Azure Functions: Project Runtime and change the default runtime version to ~4 .
4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP
trigger. You can view the local project files in the Explorer. For more information about the files that are
created, see Generated project files.
3. In the Enter request body , press Enter to send a request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in the Terminal panel.
5. Press Ctrl + C to stop Core Tools and disconnect the debugger.
After checking that the function runs correctly on your local computer, it's time to use Visual Studio Code to
publish the project directly to Azure.
Sign in to Azure
Before you can publish your app, you must sign in to Azure.
1. If you aren't already signed in, choose the Azure icon in the Activity bar. Then in the Resources area,
choose Sign in to Azure....
If you're already signed in and can see your existing subscriptions, go to the next section. If you don't yet
have an Azure account, choose Create an Azure Account.... Students can choose Create an Azure
for Students Account....
2. When prompted in the browser, choose your Azure account and sign in using your Azure account
credentials. If you create a new account, you can sign in after your account is created.
3. After you've successfully signed in, you can close the new browser window. The subscriptions that belong
to your Azure account are displayed in the sidebar.
Prompt and selection:
Select subscription Choose the subscription to use. You won't see this
prompt when you have only one subscription visible
under Resources .
Enter a globally unique name for the function Type a name that is valid in a URL path. The name you
app type is validated to make sure that it's unique in Azure
Functions.
Select a runtime stack Choose the language version on which you've been
running locally.
Select a location for new resources For better performance, choose a region near you.
The extension shows the status of individual resources as they're being created in Azure in the Azure:
Activity Log panel.
3. When the creation is complete, the following Azure resources are created in your subscription. The
resources are named based on your function app name:
A resource group, which is a logical container for related resources.
A standard Azure Storage account, which maintains state and other information about your projects.
A function app, which provides the environment for executing your function code. A function app lets
you group functions as a logical unit for easier management, deployment, and sharing of resources
within the same hosting plan.
An App Service plan, which defines the underlying host for your function app.
An Application Insights instance connected to the function app, which tracks usage of your functions
in the app.
A notification is displayed after your function app is created and the deployment package is applied.
TIP
By default, the Azure resources required by your function app are created based on the function app name you
provide. By default, they're also created in the same new resource group with the function app. If you want to
either customize the names of these resources or reuse existing resources, you need to publish the project with
advanced create options instead.
1. Choose the Azure icon in the Activity bar, then in the Workspace area, select your project folder and
select the Deploy... button.
2. Select Deploy to Function App..., choose the function app you just created, and select Deploy .
3. After deployment completes, select View Output to view the creation and deployment results, including
the Azure resources that you created. If you miss the notification, select the bell icon in the lower right
corner to see it again.
Clean up resources
When you continue to the next step and add an Azure Storage queue binding to your function, you'll need to
keep all your resources in place to build on what you've already done.
Otherwise, you can use the following steps to delete the function app and its related resources to avoid
incurring any further costs.
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group.
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group , and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
For more information about Functions costs, see Estimating Consumption plan costs.
Next steps
You have used Visual Studio Code to create a function app with a simple HTTP-triggered function. In the next
article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn
more about connecting to other Azure services, see Add bindings to an existing function in Azure Functions.
Create your first function with Java and Gradle
This article shows you how to build and publish a Java function project to Azure Functions with the Gradle
command-line tool. When you're done, your function code runs in Azure in a serverless hosting plan and is
triggered by an HTTP request.
NOTE
If Gradle isn't your preferred development tool, check out our similar tutorials for Java developers using Maven, IntelliJ IDEA, and VS Code.
Prerequisites
To develop functions using Java, you must have the following installed:
Java Developer Kit, version 8
Azure CLI
Azure Functions Core Tools version 2.6.666 or above
Gradle, version 6.8 and above
You also need an active Azure subscription. If you don't have an Azure subscription, create an Azure free account
before you begin.
IMPORTANT
The JAVA_HOME environment variable must be set to the install location of the JDK to complete this quickstart.
Open build.gradle and change the appName in the following section to a unique name to avoid domain name
conflict when deploying to Azure.
azurefunctions {
    resourceGroup = 'java-functions-group'
    appName = 'azure-functions-sample-demo'
    pricingTier = 'Consumption'
    region = 'westus'
    runtime {
        os = 'windows'
    }
    localDebug = "transport=dt_socket,server=y,suspend=n,address=5005"
}
Open the new Function.java file from the src/main/java path in a text editor and review the generated code. This
code is an HTTP triggered function that echoes the body of the request.
You see output like the following from Azure Functions Core Tools when you run the project locally:
...
Http Functions:
Trigger the function from the command line using the following cURL command in a new terminal window:
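(A representative call; the exact endpoint name appears in the Core Tools output, assumed here to be the default HttpExample.)

curl -d "AzureFunctions" http://localhost:7071/api/HttpExample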
Hello, AzureFunctions
NOTE
If you set authLevel to FUNCTION or ADMIN , the function key isn't required when running locally.
If you haven't already done so, sign in to your Azure subscription with the Azure CLI:

az login
TIP
If your account can access multiple subscriptions, use az account set to set the default subscription for this session.
Use the following command to deploy your project to a new function app.
gradle azureFunctionsDeploy
This creates the following resources in Azure, based on the values in the build.gradle file:
Resource group. Named with the resourceGroup you supplied.
Storage account. Required by Functions. The name is generated randomly based on Storage account name
requirements.
App Service plan. Serverless Consumption plan hosting for your function app in the specified region. The
name is generated randomly.
Function app. A function app is the deployment and execution unit for your functions. The name is your
appName, appended with a randomly generated number.
The deployment also packages the project files and deploys them to the new function app using zip deployment,
with run-from-package mode enabled.
The authLevel for the HTTP trigger in the sample project is ANONYMOUS , which skips authentication. However, if you use a different authLevel, such as FUNCTION or ADMIN , you need to get the function key to call the function endpoint over HTTP. The easiest way to get the function key is from the Azure portal.
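Trigger the deployed function with a similar cURL call (a sketch, assuming the sample appName from build.gradle and an ANONYMOUS authLevel):

curl -d "AzureFunctions" https://azure-functions-sample-demo.azurewebsites.net/api/HttpExample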
Hello, AzureFunctions
Next steps
You've created a Java functions project with an HTTP triggered function, run it on your local machine, and
deployed it to Azure. Now, extend your function by...
Adding an Azure Storage queue output binding
Create your first function with Java and Eclipse
8/2/2022 • 2 minutes to read
This article shows you how to create a serverless function project with the Eclipse IDE and Apache Maven, test
and debug it, then deploy it to Azure Functions.
If you don't have an Azure subscription, create an Azure free account before you begin.
IMPORTANT
The JAVA_HOME environment variable must be set to the install location of the JDK to complete this quickstart.
It's highly recommended to also install Azure Functions Core Tools, version 2, which provides a local environment for running and debugging Azure Functions.
1. Right-click the generated project, then choose Run As and Maven build .
2. In the Edit Configuration dialog, enter package in the Goals field, then select Run . This builds and packages the function code.
3. Once the build is complete, create another Run configuration as above, using azure-functions:run as the goal and name. Select Run to run the function in the IDE.
Terminate the runtime in the console window when you're done testing your function. Only one function host
can be active and running locally at a time.
Debug the function in Eclipse
In your Run As configuration set up in the previous step, change azure-functions:run to
azure-functions:run -DenableDebug and run the updated configuration to start the function app in debug mode.
Select the Run menu and open Debug Configurations . Choose Remote Java Application and create a new
one. Give your configuration a name and fill in the settings. The port should be consistent with the debug port opened by the function host, which is 5005 by default. After setup, select Debug to start debugging.
Set breakpoints and inspect objects in your function using the IDE. When finished, stop the debugger and the
running function host. Only one function host can be active and running locally at a time.
Sign in to your Azure subscription, if you haven't already done so:

az login
Deploy your code into a new Function app using the azure-functions:deploy Maven goal in a new Run As
configuration.
When the deploy is complete, you see the URL you can use to access your Azure function app:
Next steps
Review the Java Functions developer guide for more information on developing Java functions.
Add additional functions with different triggers to your project using the azure-functions:add Maven target.
Create your first Java function in Azure using IntelliJ
8/2/2022 • 3 minutes to read
This article shows you how to use Java and IntelliJ to create an Azure function.
Specifically, this article shows you:
How to create an HTTP-triggered Java function in an IntelliJ IDEA project.
Steps for testing and debugging the project in the integrated development environment (IDE) on your own
computer.
Instructions for deploying the function project to Azure Functions.
Prerequisites
An Azure account with an active subscription. Create an account for free.
An Azure supported Java Development Kit (JDK), version 8 or 11
IntelliJ IDEA, Ultimate or Community Edition
Maven 3.5.0+
The latest Azure Functions Core Tools
2. To sign in to your Azure account, open the Azure Explorer sidebar, and then click the Azure Sign In
icon in the bar on top (or from the IDEA menu, select Tools > Azure > Azure Sign in ).
3. In the Azure Sign In window, select OAuth 2.0 , and then click Sign in . For other sign-in options, see
Sign-in instructions for the Azure Toolkit for IntelliJ.
4. In the browser, sign in with your account and then go back to IntelliJ. In the Select Subscriptions dialog
box, click on the subscriptions that you want to use, then click Select .
Create your local project
To use Azure Toolkit for IntelliJ to create a local Azure Functions project, follow these steps:
1. Open IntelliJ IDEA's Welcome dialog, select New Project to open a new project wizard, then select
Azure Functions .
2. Select Http Trigger , then click Next and follow the wizard to go through all the configurations in the following pages. Confirm your project location, then click Finish . IntelliJ IDEA will then open your new project.
Run the project locally
To run the project locally, follow these steps:
1. Navigate to src/main/java/org/example/functions/HttpTriggerFunction.java to see the generated code. Beside line 24, you'll notice a green Run button. Click it and select Run 'Functions-azur...' . You'll see that your function app is running locally with a few logs.
2. You can try the function by accessing the displayed endpoint from a browser, such as http://localhost:7071/api/HttpExample?name=Azure .
3. The log is also displayed in your IDEA. Stop the function app by clicking the Stop button.
Debug the project locally
To debug the project locally, follow these steps:
1. Select the Debug button in the toolbar. If you don't see the toolbar, enable it by choosing View >
Appearance > Toolbar .
2. If you don't have any function app yet, click + in the Function line. Type in the function app name and choose the proper platform. Here you can accept the default values. Click OK , and the new function app you created will be automatically selected. Click Run to deploy your functions.
3. Right-click your HttpTrigger-Java function app, then select Trigger Function in Browser . You'll see that the browser opens with the trigger URL.
2. Fill in the class name HttpTest and select HttpTrigger in the create function class wizard, then click OK to create. In this way, you can create new functions as needed.
Cleaning up functions
Select one of your function apps using Azure Explorer in your IDEA, then right-click and select Delete . This
command might take several minutes to run. When it's done, the status will refresh in Azure Explorer .
Next steps
You've created a Java project with an HTTP triggered function, run it on your local machine, and deployed it to
Azure. Now, extend your function by continuing to the following article:
Adding an Azure Storage queue output binding
Quickstart: Create your first function with Kotlin and
Maven
8/2/2022 • 5 minutes to read
This article guides you through using the Maven command-line tool to build and publish a Kotlin function
project to Azure Functions. When you're done, your function code runs on the Consumption Plan in Azure and
can be triggered using an HTTP request.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
To develop functions using Kotlin, you must have the following installed:
Java Developer Kit, version 8
Apache Maven, version 3.0 or above
Azure CLI
Azure Functions Core Tools version 2.6.666 or above
IMPORTANT
The JAVA_HOME environment variable must be set to the install location of the JDK to complete this quickstart.
mvn archetype:generate \
-DarchetypeGroupId=com.microsoft.azure \
-DarchetypeArtifactId=azure-functions-kotlin-archetype
NOTE
If you're experiencing issues with running the command, take a look at what maven-archetype-plugin version is used.
Because you are running the command in an empty directory with no .pom file, it might be attempting to use a plugin
of the older version from ~/.m2/repository/org/apache/maven/plugins/maven-archetype-plugin if you upgraded
your Maven from an older version. If so, try deleting the maven-archetype-plugin directory and re-running the
command.
Maven asks you for values needed to finish generating the project. For groupId, artifactId, and version values,
see the Maven naming conventions reference. The appName value must be unique across Azure, so Maven
generates an app name based on the previously entered artifactId as a default. The packageName value
determines the Kotlin package for the generated function code.
The com.fabrikam.functions and fabrikam-functions identifiers below are used as an example and to make
later steps in this quickstart easier to read. You're encouraged to supply your own values to Maven in this step.
Maven creates the project files in a new folder with a name of artifactId, in this example fabrikam-functions . The
ready to run generated code in the project is a simple HTTP triggered function that echoes the body of the
request:
class Function {
    /**
     * This function listens at endpoint "/api/HttpTrigger-Java". Two ways to invoke it using "curl" in bash:
     * 1. curl -d "HTTP Body" {your host}/api/HttpTrigger-Java?code={your function key}
     * 2. curl "{your host}/api/HttpTrigger-Java?name=HTTP%20Query&code={your function key}"
     * The function key is not needed when running locally; it is used to invoke a function deployed to Azure.
     * More details: https://aka.ms/functions_authorization_keys
     */
    @FunctionName("HttpTrigger-Java")
    fun run(
            @HttpTrigger(
                    name = "req",
                    methods = [HttpMethod.GET, HttpMethod.POST],
                    authLevel = AuthorizationLevel.FUNCTION) request: HttpRequestMessage<Optional<String>>,
            context: ExecutionContext): HttpResponseMessage {

        // Read the name from the request body, falling back to the query string.
        val query = request.queryParameters["name"]
        val name = request.body.orElse(query)

        name?.let {
            return request
                    .createResponseBuilder(HttpStatus.OK)
                    .body("Hello, $name!")
                    .build()
        }

        return request
                .createResponseBuilder(HttpStatus.BAD_REQUEST)
                .body("Please pass a name on the query string or in the request body")
                .build()
    }
}
NOTE
If you're experiencing this exception: javax.xml.bind.JAXBException with Java 9, see the workaround on GitHub.
You see this output when the function is running locally on your system and ready to respond to HTTP requests:
Http Functions:
Trigger the function from the command line using curl in a new terminal window:
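For example (a sketch, assuming the default HttpTrigger-Java endpoint from the generated code):

curl -w "\n" http://localhost:7071/api/HttpTrigger-Java --data LocalFunction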
Hello LocalFunction!
Sign in to your Azure subscription, if you haven't already done so:

az login
Deploy your code into a new function app using the azure-functions:deploy Maven target.
NOTE
When you use Visual Studio Code to deploy your function app, remember to choose a non-free subscription, or you'll get an error. You can see your subscription on the left side of the IDE.
mvn azure-functions:deploy
When the deploy is complete, you see the URL you can use to access your function app:
NOTE
Make sure you set the Access rights to Anonymous . When you choose the default level of Function , you are required
to present the function key in requests to access your function endpoint.
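Trigger the deployed function with another cURL call, substituting your own function app name (a sketch):

curl -w "\n" https://<your-app-name>.azurewebsites.net/api/HttpTrigger-Java --data AzureFunctions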
Hello AzureFunctions!
To change the response, update the body of the OK response in the generated function code from:

return request
        .createResponseBuilder(HttpStatus.OK)
        .body("Hello, $name!")
        .build()

to:

return request
        .createResponseBuilder(HttpStatus.OK)
        .body("Hi, $name!")
        .build()
Save the changes and redeploy by running azure-functions:deploy from the terminal as before. The function app is updated, and repeating the earlier request returns the new message:
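For example (again substituting your app name):

curl -w "\n" https://<your-app-name>.azurewebsites.net/api/HttpTrigger-Java --data AzureFunctionsTest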
Hi, AzureFunctionsTest
Reference bindings
To work with Azure Functions triggers and bindings other than the HTTP trigger and Timer trigger, you need to install binding extensions. While not required by this article, you'll need to know how to enable extensions when working with other binding types.
The easiest way to install binding extensions is to enable extension bundles. When you enable bundles, a
predefined set of extension packages is automatically installed.
To enable extension bundles, open the host.json file and update its contents to match the following code:
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
Next steps
You've created a Kotlin function app with a simple HTTP trigger and deployed it to Azure Functions.
Review the Azure Functions Java developer guide for more information on developing Java and Kotlin
functions.
Add additional functions with different triggers to your project using the azure-functions:add Maven target.
Write and debug functions locally with Visual Studio Code, IntelliJ, and Eclipse.
Debug functions deployed in Azure with Visual Studio Code. See the Visual Studio Code serverless Java
applications documentation for instructions.
Create your first Kotlin function in Azure using
IntelliJ
8/2/2022 • 3 minutes to read
This article shows you how to create an HTTP-triggered Kotlin function in an IntelliJ IDEA project, run and debug
the project in the integrated development environment (IDE), and finally deploy the function project to a
function app in Azure.
If you don't have an Azure subscription, create an Azure free account before you begin.
IMPORTANT
The JAVA_HOME environment variable must be set to the install location of the JDK to complete the steps in this article.
This command causes the function host to open a debug port at 5005.
2. On the Run menu, select Edit Configurations .
3. Select (+) to add a Remote .
4. Complete the Name and Settings fields, and then select OK to save the configuration.
5. After setup, select Debug < Remote Configuration Name > or press Shift+F9 on your keyboard to
start debugging.
6. When you're finished, stop the debugger and the running process. Only one function host can be active
and running locally at a time.
1. Sign in to your Azure subscription with the Azure CLI:

az login
2. Deploy your code into a new function app by using the azure-functions:deploy Maven target. You can
also select the azure-functions:deploy option in the Maven Projects window.
mvn azure-functions:deploy
3. Find the URL for your HTTP trigger function in the Azure CLI output after the function app has been
successfully deployed.
Next steps
Now that you have deployed your first Kotlin function app to Azure, review the Azure Functions Java developer
guide for more information on developing Java and Kotlin functions.
Add additional function apps with different triggers to your project by using the azure-functions:add Maven
target.
Create a function app on Linux in an Azure App
Service plan
8/2/2022 • 6 minutes to read
Azure Functions lets you host your functions on Linux in a default Azure App Service container. This article walks
you through how to use the Azure portal to create a Linux-hosted function app that runs in an App Service plan.
You can also bring your own custom container.
NOTE
In-portal editing is only supported for JavaScript, PowerShell, TypeScript, and C# Script functions.
For C# class library, Java, and Python functions, you can create the function app in the portal, but you must also create
the functions locally and then publish them to Azure.
If you don't have an Azure subscription, create an Azure free account before you begin.
Sign in to Azure
Sign in to the Azure portal using your Azure account.
Function App name: Globally unique name. A name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
Even after your function app is available, it may take a few minutes to be fully initialized.
Next, you create a function in the new function app.
NOTE
The portal development experience can be useful for trying out Azure Functions. For most scenarios, consider developing
your functions locally and publishing the project to your function app using either Visual Studio Code or the Azure
Functions Core Tools.
1. From the left menu of the Functions window, select Functions , then select Add from the top menu.
2. From the New Function window, select Http trigger .
3. In the New Function window, accept the default name for New Function , or enter a new name.
4. Choose Anonymous from the Authorization level drop-down list, and then select Create Function .
Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request.
3. Paste the function URL into your browser's address bar. Add the query string value ?name=<your_name> to
the end of this URL and press Enter to run the request.
The following example shows the response in the browser:
The request URL includes a key that is required, by default, to access your function over HTTP.
4. When your function runs, trace information is written to the logs. To see the trace output, return to the
Code + Test page in the portal and expand the Logs arrow at the bottom of the page.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You have created a function app with a simple HTTP trigger function.
Now that you've created your first function, let's add an output binding to the function that writes a message to
a Storage queue.
Add messages to an Azure Storage queue using Functions
For more information, see Azure Functions HTTP bindings.
Quickstart: Create a Python function in Azure from
the command line
8/2/2022 • 9 minutes to read
In this article, you use command-line tools to create a Python function that responds to HTTP requests. After
testing the code locally, you deploy it to the serverless environment of Azure Functions.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a Visual Studio Code-based version of this article.
In a terminal or command window, run func --version to check that the Azure Functions Core Tools
version is 4.x.
Run az --version to check that the Azure CLI version is 2.4 or later.
Run az login to sign in to Azure and verify an active subscription.
Run python --version (Linux/macOS) or py --version (Windows) to check your Python version reports
3.9.x, 3.8.x, or 3.7.x.
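Create and activate a virtual environment in a suitable folder (a sketch of the typical commands; on Windows, use py -m venv .venv and .venv\Scripts\activate instead):

python -m venv .venv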
source .venv/bin/activate
If Python didn't install the venv package on your Linux distribution, run the following command:
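On Debian-based distributions, for example (an assumption about your distribution's package manager):

sudo apt-get install python3-venv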
cd LocalFunctionProj
This folder contains various files for the project, including configuration files named local.settings.json and host.json. Because local.settings.json can contain secrets downloaded from Azure, the file is excluded from source control by default in the .gitignore file.
3. Add a function to your project by using the following command, where the --name argument is the
unique name of your function (HttpExample) and the --template argument specifies the function's
trigger (HTTP).
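A representative command, assuming the HTTP trigger template name used by Core Tools:

func new --name HttpExample --template "HTTP trigger"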
func new creates a subfolder matching the function name that contains a code file appropriate to the
project's chosen language and a configuration file named function.json.
Get the list of templates by using the following command:
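For example, to list the Python templates:

func templates list -l python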
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
For an HTTP trigger, the function receives request data in the variable req as defined in function.json. req is an
instance of the azure.functions.HttpRequest class. The return object, defined as $return in function.json, is an
instance of azure.functions.HttpResponse class. For more information, see Azure Functions HTTP triggers and
bindings.
function.json
function.json is a configuration file that defines the input and output bindings for the function, including the
trigger type.
If desired, you can change scriptFile to invoke a different Python file.
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
Each binding requires a direction, a type, and a unique name. The HTTP trigger has an input binding of type
httpTrigger and output binding of type http .
Run the function locally
1. Run your function by starting the local Azure Functions runtime host from the LocalFunctionProj folder.
func start
Toward the end of the output, the following lines must appear:
...
Http Functions:
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the
project. In that case, use Ctrl+C to stop the host, go to the project's root folder, and run the previous command
again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a response message that echoes back your query string value. The terminal in
which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl + C and type y to stop the functions host.
If you haven't already done so, sign in to Azure and, optionally, enable local persistence of your most recently used parameter values:

az login
az config param-persist on
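A representative command, matching the resource group name referenced later in this article:

az group create --name AzureFunctionsQuickstart-rg --location <REGION>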
The az group create command creates a resource group. In the above command, replace <REGION> with a
region near you, using an available region code returned from the az account list-locations command.
NOTE
You can't host Linux and Windows apps in the same resource group. If you have an existing resource group
named AzureFunctionsQuickstart-rg with a Windows function app or web app, you must use a different
resource group.
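The quickstart first creates a general-purpose storage account and then the function app; representative commands, assuming the placeholder names used in this article (<STORAGE_NAME> must be globally unique):

az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS
az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime python --runtime-version 3.9 --functions-version 4 --name <APP_NAME> --os-type linux --storage-account <STORAGE_NAME>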
The az functionapp create command creates the function app in Azure. If you're using Python 3.8, 3.7, or
3.6, change --runtime-version to 3.8 , 3.7 , or 3.6 , respectively. You must supply --os-type linux
because Python functions can't run on Windows, which is the default.
In the previous example, replace <APP_NAME> with a globally unique name appropriate to you. The
<APP_NAME> is also the default DNS domain for the function app.
This command creates a function app running in your specified language runtime under the Azure
Functions Consumption Plan, which is free for the amount of usage you incur here. The command also
provisions an associated Azure Application Insights instance in the same resource group, with which you
can monitor your function app and view logs. For more information, see Monitor Azure Functions. The
instance incurs no costs until you activate it.
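With the Azure resources in place, deploy your local project using Core Tools:

func azure functionapp publish <APP_NAME>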
The publish command shows results similar to the following output (truncated for simplicity):
...
...
Deployment successful.
Remote build succeeded!
Syncing triggers...
Functions in msdocs-azurefunctions-qs:
HttpExample - [httpTrigger]
Invoke url: https://msdocs-azurefunctions-qs.azurewebsites.net/api/httpexample
Copy the complete Invoke URL shown in the output of the publish command into a browser address bar,
appending the query parameter ?name=Functions . The browser should display similar output as when you ran
the function locally.
Run the following command to view near real-time streaming logs in Application Insights in the Azure portal.
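A sketch of the Core Tools command (the --browser flag opens the Live Metrics view in the portal):

func azure functionapp logstream <APP_NAME> --browser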
In a separate terminal window or in the browser, call the remote function again. A verbose log of the function
execution in Azure is shown in the terminal.
Clean up resources
If you continue to the next step and add an Azure Storage queue output binding, keep all your resources in place
as you'll build on what you've already done.
Otherwise, use the following command to delete the resource group and all its contained resources to avoid
incurring further costs.
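Using the Azure CLI, for example:

az group delete --name AzureFunctionsQuickstart-rg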
Next steps
Connect to an Azure Storage queue
Create a Premium plan function app in the Azure
portal
8/2/2022 • 3 minutes to read
Azure Functions offers a scalable Premium plan that provides virtual network connectivity, no cold start, and
premium hardware. To learn more, see Azure Functions Premium plan.
In this article, you learn how to use the Azure portal to create a function app in a Premium plan.
Sign in to Azure
Sign in to the Azure portal with your Azure account.
Function App name: Globally unique name. A name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
4. Select Next: Hosting . On the Hosting page, enter the following settings:
5. Select Next: Monitoring . On the Monitoring page, enter the following settings:
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
Add an HTTP triggered function
Create a function using Azure for Students Starter
8/2/2022 • 6 minutes to read
In this tutorial, we'll create a "hello world" HTTP function in an Azure for Students Starter subscription. We'll also
walk through what's available in Azure Functions in this subscription type.
Microsoft Azure for Students Starter gets you started with the Azure products you need to develop in the cloud
at no cost to you. Learn more about this offer here.
Azure Functions lets you execute your code in a serverless environment without having to first create a VM or
publish a web application. Learn more about Functions here.
If you don't have an Azure subscription, create an Azure free account before you begin.
Create a function
In this article, you learn how to use Azure Functions to create a "hello world" HTTP trigger function in the Azure portal.
Sign in to Azure
Sign in to the Azure portal with your Azure account.
Function App name: Globally unique name. A name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
4. Select Next: Hosting . On the Hosting page, enter the following settings:
5. Select Next: Monitoring . On the Monitoring page, enter the following settings:
3. In the New Function window, accept the default name for New Function , or enter a new name.
4. Choose Anonymous from the Authorization level drop-down list, and then select Create Function .
Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request.
2. In the Get function URL dialog box, select default from the drop-down list, and then select the Copy
to clipboard icon.
3. Paste the function URL into your browser's address bar. Add the query string value ?name=<your_name> to
the end of this URL and press Enter to run the request.
The following example shows the response in the browser:
The request URL includes a key that is required, by default, to access your function over HTTP.
4. When your function runs, trace information is written to the logs. To see the trace output, return to the
Code + Test page in the portal and expand the Logs arrow at the bottom of the page.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've now finished creating a function app with a simple HTTP trigger function. Next, you can explore local
tooling, more languages, monitoring, and integrations.
Create your first function using Visual Studio
Create your first function using Visual Studio Code
Azure Functions JavaScript developer guide
Use Azure Functions to connect to an Azure SQL Database
Learn more about Azure Functions HTTP bindings.
Monitor your Azure Functions
Create a function triggered by Azure Cosmos DB
8/2/2022 • 10 minutes to read
Learn how to create a function triggered when data is added to or changed in Azure Cosmos DB. To learn more
about Azure Cosmos DB, see Azure Cosmos DB: Serverless database computing using Azure Functions.
Prerequisites
To complete this tutorial:
If you don't have an Azure subscription, create a free account before you begin.
NOTE
Azure Cosmos DB bindings are only supported for use with the SQL API. Support for Table API is provided by using the
Table storage bindings, starting with extension 5.x. For all other Azure Cosmos DB APIs, you should access the database
from your function by using the static client for your API, including Azure Cosmos DB's API for MongoDB, Cassandra API,
and Gremlin API.
Sign in to Azure
Sign in to the Azure portal with your Azure account.
Location: The region closest to your users. Select a geographic location to host your Azure Cosmos DB account. Use the location that's closest to your users to give them the fastest access to the data.
Apply Azure Cosmos DB free tier discount: Apply or Do not apply. With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about free tier.
NOTE
You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating
the account. If you do not see the option to apply the free tier discount, this means another account in the
subscription has already been enabled with free tier.
5. In the Global Distribution tab, configure the following details. You can leave the default values for this
quickstart:
NOTE
The following options are not available if you select Serverless as the Capacity mode :
Apply Free Tier Discount
Geo-redundancy
Multi-region Writes
Function App name: Globally unique name. A name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
4. Select Next: Hosting . On the Hosting page, enter the following settings:
5. Select Next: Monitoring . On the Monitoring page, enter the following settings:
3. Configure the new trigger with the following settings:
New Function: Accept the default name. The name of the function.
Cosmos DB account connection: Accept the default new name. Select New , choose the Database Account you created earlier, and then select OK . This action creates an application setting for your account connection. This setting is used by the binding to connect to the database.
Collection name for leases: leases. The name of the collection to store the leases.
3. Choose your Azure Cosmos DB account, then select the Data Explorer .
4. Under SQL API , choose Tasks database and select New Container .
5. In Add Container , use the settings shown in the table below the image.
6. Click OK to create the Items container. It may take a short time for the container to get created.
After the container specified in the function binding exists, you can test the function by adding items to this new
container.
{
  "id": "task1",
  "category": "general",
  "description": "some task"
}
3. Switch to the first browser tab that contains your function in the portal. Expand the function logs and
verify that the new document has triggered the function. See that the task1 document ID value is written
to the logs.
4. (Optional) Go back to your document, make a change, and click Update . Then, go back to the function
logs and verify that the update has also triggered the function.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You have created a function that runs when a document is added or modified in your Azure Cosmos DB. For
more information about Azure Cosmos DB triggers, see Azure Cosmos DB bindings for Azure Functions.
Now that you've created your first function, let's add an output binding to the function that writes a message to
a Storage queue.
Add messages to an Azure Storage queue using Functions
Create a function in Azure that's triggered by Blob
storage
8/2/2022 • 5 minutes to read
Learn how to create a function triggered when files are uploaded to or updated in a Blob storage container.
Prerequisites
An Azure subscription. If you don't have one, create a free account before you begin.
Function App name: Globally unique name. A name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
4. Select Next: Hosting . On the Hosting page, enter the following settings:
5. Select Next: Monitoring . On the Monitoring page, enter the following settings:
New Function: Unique in your function app. The name of this blob triggered function.
Storage account connection: AzureWebJobsStorage. You can use the storage account connection already being used by your function app, or create a new one.
Now that you have a blob container, you can test the function by uploading a file to the container.
2. In a separate browser window, go to your resource group in the Azure portal, and select the storage
account.
3. Select Containers , and then select the samples-workitems container.
4. Select Upload , and then select the folder icon to choose a file to upload.
5. Browse to a file on your local computer, such as an image file, choose the file. Select Open and then
Upload .
6. Go back to your function logs and verify that the blob has been read.
NOTE
When your function app runs in the default Consumption plan, there may be a delay of up to several minutes
between the blob being added or updated and the function being triggered. If you need low latency in your blob
triggered functions, consider running your function app in an App Service plan.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You have created a function that runs when a blob is added to or updated in Blob storage. For more information
about Blob storage triggers, see Azure Functions Blob storage bindings.
Now that you've created your first function, let's add an output binding to the function that writes a message to
a Storage queue.
Add messages to an Azure Storage queue using Functions
Create a function triggered by Azure Queue
storage
8/2/2022 • 5 minutes to read
Learn how to create a function that is triggered when messages are submitted to an Azure Storage queue.
Prerequisites
An Azure subscription. If you don't have one, create a free account before you begin.
Function App name: Globally unique name. A name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
4. Select Next: Hosting . On the Hosting page, enter the following settings:
5. Select Next: Monitoring . On the Monitoring page, enter the following settings:
Storage account connection: AzureWebJobsStorage. You can use the storage account connection already being used by your function app, or create a new one.
Now that you have a storage queue, you can test the function by adding a message to the queue.
4. Select Add message , and type "Hello World!" in Message text . Select OK .
5. Wait for a few seconds, then go back to your function logs and verify that the new message has been
read from the queue.
6. Back in your storage queue, select Refresh and verify that the message has been processed and is no
longer in the queue.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You have created a function that runs when a message is added to a storage queue. For more information about
Queue storage triggers, see Azure Functions Storage queue bindings.
Now that you've created your first function, let's add an output binding to the function that writes a message back to another queue.
Add messages to an Azure Storage queue using Functions
Create a function in the Azure portal that runs on a
schedule
8/2/2022 • 4 minutes to read
Learn how to use the Azure portal to create a function that runs serverless on Azure based on a schedule that
you define.
Prerequisites
To complete this tutorial:
Ensure that you have an Azure subscription. If you don't have an Azure subscription, create a free account before
you begin.
Function App name: Globally unique name. A name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
4. Select Next: Hosting . On the Hosting page, enter the following settings:
5. Select Next: Monitoring . On the Monitoring page, enter the following settings:
Your new function app is ready to use. Next, you'll create a function in the new function app.
3. Configure the new trigger with the settings as specified in the table below the image, and then select
Create .
Now, you change the function's schedule so that it runs once every hour instead of every minute.
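Timer schedules use NCRONTAB expressions; a representative value for once every hour on the hour (the portal field is typically named Schedule):

0 0 * * * *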
You now have a function that runs once every hour, on the hour.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've created a function that runs based on a schedule. For more information about timer triggers, see
Schedule code execution with Azure Functions.
Now that you've created your first function, let's add an output binding to the function that writes a message to
a Storage queue.
Add messages to an Azure Storage queue using Functions
Connect functions to Azure services using bindings
8/2/2022 • 3 minutes to read
When you create a function, language-specific trigger code is added in your project from a set of trigger
templates. If you want to connect your function to other services by using input or output bindings, you have to
add specific binding definitions in your function. To learn more about bindings, see Azure Functions triggers and
bindings concepts.
Local development
When you develop functions locally, you need to update the function code to add bindings. Using Visual Studio
Code can make it easier to add bindings to a function.
Visual Studio Code
When you use Visual Studio Code to develop your function and your function uses a function.json file, the Azure
Functions extension can automatically add a binding to an existing function.json file. To learn more, see Add
input and output bindings.
Manually add bindings based on examples
When adding a binding to an existing function, you need to update both the function code and the function.json configuration file, if it's used by your language. Both .NET class library and Java functions use attributes instead of function.json, so you'll need to update the attributes in code instead.
Use the following table to find examples of specific binding types that you can use to guide you in updating an
existing function. First, choose the language tab that corresponds to your project.
C#
Java
JavaScript
PowerShell
Python
RabbitMQ: Trigger, Output
SendGrid: Output
SignalR: Trigger, Input, Output
Azure portal
When you develop your functions in the Azure portal, you add input and output bindings in the Integrate tab
for a given function. The new bindings are added to either the function.json file or to the method attributes,
depending on your language. The following articles show examples of how to add bindings to an existing
function in the portal:
Queue storage output binding
Azure Cosmos DB output binding
Next steps
Azure Functions triggers and bindings concepts
Store unstructured data using Azure Functions and
Azure Cosmos DB
8/2/2022 • 6 minutes to read
Azure Cosmos DB is a great way to store unstructured and JSON data. Combined with Azure Functions, Cosmos
DB makes storing data quick and easy with much less code than required for storing data in a relational
database.
NOTE
At this time, the Azure Cosmos DB trigger, input bindings, and output bindings work with SQL API and Graph API
accounts only.
In Azure Functions, input and output bindings provide a declarative way to connect to external service data from
your function. In this article, learn how to update an existing function to add an output binding that stores
unstructured data in an Azure Cosmos DB document.
Prerequisites
To complete this tutorial:
This topic uses as its starting point the resources created in Create your first function from the Azure portal. If
you haven't already done so, please complete these steps now to create your function app.
Location: The region closest to your users. Select a geographic location to host your Azure Cosmos DB account. Use the location that's closest to your users to give them the fastest access to the data.
Apply Azure Cosmos DB free tier discount: Apply or Do not apply. With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about free tier.
NOTE
You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating
the account. If you do not see the option to apply the free tier discount, this means another account in the
subscription has already been enabled with free tier.
5. In the Global Distribution tab, configure the following details. You can leave the default values for this
quickstart:
NOTE
The following options are not available if you select Serverless as the Capacity mode :
Apply Free Tier Discount
Geo-redundancy
Multi-region Writes
If true, creates the Cosmos DB database and collection: Yes. The collection doesn't already exist, so create it.
Cosmos DB account connection: New setting. Select New , then choose Azure Cosmos DB Account and the Database account you created earlier, and then select OK . This creates an application setting for your account connection. This setting is used by the binding to connect to the database.
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public static IActionResult Run(HttpRequest req, out object taskDocument, ILogger log)
{
    string name = req.Query["name"];
    string task = req.Query["task"];
    string duedate = req.Query["duedate"];

    // Build the document written by the Cosmos DB output binding
    // (a sketch; the full sample also validates that name and task are present).
    taskDocument = new { name, task, duedate };

    return new OkResult();
}
This code sample reads the HTTP Request query strings and assigns them to fields in the taskDocument object.
The taskDocument binding sends the object data from this binding parameter to be stored in the bound
document database. The database is created the first time the function runs.
You've successfully added a binding to your HTTP trigger to store unstructured data in an Azure Cosmos DB.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, you can delete them by deleting the resource group.
From the Azure portal menu or Home page, select Resource groups . Then, on the Resource groups page,
select myResourceGroup .
On the myResourceGroup page, make sure that the listed resources are the ones you want to delete.
Select Delete resource group , type myResourceGroup in the text box to confirm, and then select Delete .
Next steps
For more information about binding to a Cosmos DB database, see Azure Functions Cosmos DB bindings.
Azure Functions triggers and bindings concepts
Learn how Functions integrates with other services.
Azure Functions developer reference
Provides more technical information about the Functions runtime and a reference for coding functions and
defining triggers and bindings.
Code and test Azure Functions locally
Describes the options for developing your functions locally.
Add messages to an Azure Storage queue using
Functions
8/2/2022 • 5 minutes to read
In Azure Functions, input and output bindings provide a declarative way to make data from external services
available to your code. In this quickstart, you use an output binding to create a message in a queue when a
function is triggered by an HTTP request. You then use the Azure portal to view the queue messages that your function creates.
Prerequisites
To complete this quickstart:
An Azure subscription. If you don't have one, create a free account before you begin.
Follow the directions in Create your first function from the Azure portal and don't do the Clean up
resources step. That quickstart creates the function app and function that you use here.
4. Select the Azure Queue Storage binding type, and add the settings as specified in the table that follows
this screenshot:
Storage account connection: AzureWebJobsStorage. You can use the storage account connection already being used by your function app, or create a new one.
Add an outputQueueItem parameter to the method signature as shown in the following example.
In the body of the function just before the return statement, add code that uses the parameter to create
a queue message.
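A minimal sketch of both changes for the C# script (run.csx) function from the earlier quickstart; the binding itself is defined in the portal's Integrate tab, so no attribute is needed in code:

public static IActionResult Run(HttpRequest req,
    ICollector<string> outputQueueItem, ILogger log)
{
    string name = req.Query["name"];

    // Create a queue message just before returning (placement is illustrative).
    outputQueueItem.Add("Name passed to the function: " + name);

    return new OkObjectResult($"Hello, {name}");
}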
Notice that the Request body contains the name value Azure. This value appears in the queue message
that is created when the function is invoked.
As an alternative to selecting Run here, you can call the function by entering a URL in a browser and
specifying the name value in the query string. The browser method is shown in the previous quickstart.
3. Check the logs to make sure that the function succeeded.
A new queue named outqueue is created in your Storage account by the Functions runtime when the output
binding is first used. You'll use the storage account to verify that the queue and a message in it were created.
Find the storage account connected to AzureWebJobsStorage
1. Go to your function app and select Configuration .
2. Under Application settings , select AzureWebJobsStorage .
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
In this quickstart, you added an output binding to an existing function. For more information about binding to
Queue storage, see Azure Functions Storage queue bindings.
Azure Functions triggers and bindings concepts
Learn how Functions integrates with other services.
Azure Functions developer reference
Provides more technical information about the Functions runtime and a reference for coding functions and
defining triggers and bindings.
Code and test Azure Functions locally
Describes the options for developing your functions locally.
Connect Azure Functions to Azure Storage using
Visual Studio Code
8/2/2022 • 16 minutes to read
Azure Functions lets you connect Azure services and other resources to functions without having to write your
own integration code. These bindings, which represent both input and output, are declared within the function
definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input
binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more,
see Azure Functions triggers and bindings concepts.
In this article, you learn how to use Visual Studio Code to connect Azure Storage to the function you created in
the previous quickstart article. The output binding that you add to this function writes data from the HTTP
request to a message in an Azure Queue storage queue.
Most bindings require a stored connection string that Functions uses to access the bound service. To make it
easier, you use the storage account that you created with your function app. The connection to this account is
already stored in an app setting named AzureWebJobsStorage .
IMPORTANT
Because the local.settings.json file contains secrets, it never gets published, and it should be excluded from
source control.
3. Copy the value AzureWebJobsStorage , which is the key for the storage account connection string value.
You use this connection to verify that the output binding works as expected.
{
    "version": "2.0",
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[1.*, 2.0.0)"
    }
}
Now, you can add the storage output binding to your project.
Except for HTTP and timer triggers, bindings are implemented as extension packages. Run the following dotnet
add package command in the Terminal window to add the Storage extension package to your project.
In-process
Isolated process
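For an in-process project, a minimal sketch of the command, using the same extension package that the Visual Studio version of this article installs; an isolated-process project would reference the Microsoft.Azure.Functions.Worker.Extensions.Storage package instead:

dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage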
Now, you can add the storage output binding to your project.
Prompt: Select binding with direction...
Value: Azure Queue Storage
Description: The binding is an Azure Storage queue binding.

Prompt: The name used to identify this binding in your code
Value: msg
Description: Name that identifies the binding parameter referenced in your code.

Prompt: The queue to which the message will be sent
Value: outqueue
Description: The name of the queue that the binding writes to. When the queueName doesn't exist, the binding creates it on first use.
A binding is added to the bindings array in your function.json, which should look like the following:
{
    "type": "queue",
    "direction": "out",
    "name": "msg",
    "queueName": "outqueue",
    "connection": "AzureWebJobsStorage"
}
In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions
depend on whether your app runs in-process (C# class library) or in an isolated process.
In-process
Isolated process
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
The msg parameter is an ICollector<T> type, representing a collection of messages written to an output
binding when the function completes. In this case, the output is a storage queue named outqueue . The
StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting
that contains the storage account connection string and can be applied at the class, method, or parameter level.
In this case, you could omit StorageAccountAttribute because you're already using the default storage account.
The Run method definition must now look like the following code:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In a Java project, the bindings are defined as binding annotations on the function method. The function.json file
is then autogenerated based on these annotations.
Browse to the location of your function code under src/main/java, open the Function.java project file, and add
the following parameter to the run method definition:
The msg parameter is an OutputBinding<T> type, which represents a collection of strings that are written as
messages to an output binding when the function completes. In this case, the output is a storage queue named
outqueue . The connection string for the Storage account is set by the connection method. Rather than the
connection string itself, you pass the application setting that contains the Storage account connection string.
The run method definition should now look like the following example:
@FunctionName("HttpExample")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue",
connection = "AzureWebJobsStorage") OutputBinding<String> msg,
final ExecutionContext context) {
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this
code before the context.res statement.
context.bindings.msg = name;
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));

    if (name) {
        // Add a message to the storage queue,
        // which is the name passed to the function.
        context.bindings.msg = name;
        // Send a "hello" response.
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};
Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add
this code before you set the OK status in the if statement.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
if ($name) {
    # Write the $name value to the queue,
    # which is the name passed to the function.
    $outputMsg = $name
    Push-OutputBinding -name msg -Value $outputMsg

    $status = [HttpStatusCode]::OK
    $body = "Hello $name"
}
else {
    $status = [HttpStatusCode]::BadRequest
    $body = "Please pass a name on the query string or in the request body."
}
Update HttpExample\__init__.py to match the following code: add the msg parameter to the function definition and call msg.set(name) under the if name: statement:

import logging

import azure.functions as func


# msg is the queue output binding declared in function.json.
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        msg.set(name)
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )

The msg parameter is an instance of the azure.functions.Out class. The set method writes a string message
to the queue. In this case, it's the name passed to the function in the URL query string.
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
    // Add a message to the output collection.
    msg.Add(name);
}
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Now, you can use the new msg parameter to write to the output binding from your function code. Add the
following line of code before the success response to add the value of name to the msg output binding.
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a
queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your run method should now look like the following example:
@FunctionName("HttpExample")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue",
connection = "AzureWebJobsStorage") OutputBinding<String> msg,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
if (name == null) {
return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
.body("Please pass a name on the query string or in the request body").build();
} else {
// Write the name to the message queue.
msg.setValue(name);
@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);
final HttpResponseMessage ret = new Function().run(req, msg, context);
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With the Core Tools running, go to the Azure: Functions area. Under Functions , expand Local Project
> Functions . Right-click (Windows) or Ctrl - click (macOS) the HttpExample function and choose
Execute Function Now....
3. In the Enter request body prompt, press Enter to send the request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in the Terminal panel.
5. Press Ctrl + C to stop Core Tools and disconnect the debugger.
2. In the Connect dialog, choose Add an Azure account , choose your Azure environment , and then
select Sign in....
After you successfully sign in to your account, you see all of the Azure subscriptions associated with your
account.
Examine the output queue
1. In Visual Studio Code, press F1 to open the command palette, then search for and run the command
Azure Storage: Open in Storage Explorer and choose your storage account name. Your storage account
opens in the Azure Storage Explorer.
2. Expand the Queues node, and then select the queue named outqueue .
The queue contains the message that the queue output binding created when you ran the HTTP-triggered
function. If you invoked the function with the default name value of Azure, the queue message is Name
passed to the function: Azure.
3. Run the function again, send another request, and you see a new message in the queue.
Now, it's time to republish the updated function app to Azure.
Clean up resources
In Azure, resources refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You may be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group .
4. On the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
5. Select Delete resource group , and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. Now you can learn more about
developing Functions using Visual Studio Code:
Develop Azure Functions using Visual Studio Code
Azure Functions triggers and bindings.
Examples of complete Function projects in C#.
Azure Functions C# developer reference
Examples of complete Function projects in JavaScript.
Azure Functions JavaScript developer guide
Examples of complete Function projects in Java.
Azure Functions Java developer guide
Examples of complete Function projects in TypeScript.
Azure Functions TypeScript developer guide
Examples of complete Function projects in Python.
Azure Functions Python developer guide
Examples of complete Function projects in PowerShell.
Azure Functions PowerShell developer guide
Connect functions to Azure Storage using Visual
Studio
8/2/2022 • 7 minutes to read
Azure Functions lets you connect Azure services and other resources to functions without having to write your
own integration code. These bindings, which represent both input and output, are declared within the function
definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input
binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more,
see Azure Functions triggers and bindings concepts.
This article shows you how to use Visual Studio to connect the function you created in the previous quickstart
article to Azure Storage. The output binding that you add to this function writes data from the HTTP request to a
message in an Azure Queue storage queue.
Most bindings require a stored connection string that Functions uses to access the bound service. To make it
easier, you use the Storage account that you created with your function app. The connection to this account is
already stored in an app setting named AzureWebJobsStorage .
Prerequisites
Before you start this article, you must:
Complete part 1 of the Visual Studio quickstart.
Sign in to your Azure subscription from Visual Studio.
Install-Package Microsoft.Azure.WebJobs.Extensions.Storage
Now, you can add the storage output binding to your project.
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
    // Add a message to the output collection.
    msg.Add(name);
}
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Run the function locally
1. To run your function, press F5 in Visual Studio. You might need to enable a firewall exception so that the
tools can handle HTTP requests. Authorization levels are never enforced when you run a function locally.
2. Copy the URL of your function from the Azure Functions runtime output.
3. Paste the URL for the HTTP request into your browser's address bar. Append the query string
?name=<YOUR_NAME> to this URL and run the request. The following image shows the response in the
browser to the local GET request returned by the function:
3. Expand the Queues node, and then double-click the queue named outqueue to view the contents of the
queue in Visual Studio.
The queue contains the message that the queue output binding created when you ran the HTTP-triggered
function. If you invoked the function with the default name value of Azure, the queue message is Name
passed to the function: Azure.
4. Run the function again, send another request, and you'll see a new message appear in the queue.
Now, it's time to republish the updated function app to Azure.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. To learn more about developing
Functions, see Develop Azure Functions using Visual Studio.
Next, you should enable Application Insights monitoring for your function app:
Enable Application Insights integration
Connect your Java function to Azure Storage
8/2/2022 • 6 minutes to read
Azure Functions lets you connect Azure services and other resources to functions without having to write your
own integration code. These bindings, which represent both input and output, are declared within the function
definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input
binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more,
see Azure Functions triggers and bindings concepts.
This article shows you how to integrate the function you created in the previous quickstart article with an Azure
Storage queue. The output binding that you add to this function writes data from an HTTP request to a message
in the queue.
Most bindings require a stored connection string that Functions uses to access the bound service. To make this
connection easier, you use the Storage account that you created with your function app. The connection to this
account is already stored in an app setting named AzureWebJobsStorage .
Prerequisites
Before you start this article, complete the steps in part 1 of the Java quickstart.
IMPORTANT
This command overwrites any existing settings with values from your function app in Azure.
Because it contains secrets, the local.settings.json file never gets published, and it should be excluded from source control.
You need the value AzureWebJobsStorage , which is the Storage account connection string. You use this
connection to verify that the output binding works as expected.
You can now add the Storage output binding to your project.
The msg parameter is an OutputBinding<T> type, which represents a collection of strings. These strings are
written as messages to an output binding when the function completes. In this case, the output is a storage
queue named outqueue . The connection string for the Storage account is set by the connection method. You
pass the application setting that contains the Storage account connection string, rather than passing the
connection string itself.
The run method definition must now look like the following example:
@FunctionName("HttpTrigger-Java")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.FUNCTION)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage")
OutputBinding<String> msg, final ExecutionContext context) {
...
}
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a
queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your run method must now look like the following example:
@FunctionName("HttpTrigger-Java")
public HttpResponseMessage run(
        @HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<String>> request,
        @QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage")
        OutputBinding<String> msg,
        final ExecutionContext context) {
    context.getLogger().info("Java HTTP trigger processed a request.");

    // Parse the query parameter, as in the HTTP trigger template.
    final String query = request.getQueryParameters().get("name");
    final String name = request.getBody().orElse(query);

    if (name == null) {
        return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
            .body("Please pass a name on the query string or in the request body").build();
    } else {
        // Write the name to the message queue.
        msg.setValue(name);
        return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
    }
}

Because the run method signature changed, the generated unit tests also need to mock the new msg parameter before calling run:

@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);

final HttpResponseMessage ret = new Function().run(req, msg, context);
You're now ready to try out the new output binding locally.
NOTE
Because you enabled extension bundles in the host.json, the Storage binding extension was downloaded and installed for
you during startup, along with the other Microsoft binding extensions.
As before, trigger the function from the command line using cURL in a new terminal window:
curl -w "\n" http://localhost:7071/api/HttpTrigger-Java --data AzureFunctions
This time, the output binding also creates a queue named outqueue in your Storage account and adds a
message with this same string.
Next, you use the Azure CLI to view the new queue and verify that a message was added. You can also view your
queue by using the Microsoft Azure Storage Explorer or in the Azure portal.
Set the Storage account connection
Open the local.settings.json file and copy the value of AzureWebJobsStorage , which is the Storage account
connection string. Set the AZURE_STORAGE_CONNECTION_STRING environment variable to the connection string by
using this Bash command:
export AZURE_STORAGE_CONNECTION_STRING="<STORAGE_CONNECTION_STRING>"
When you set the connection string in the AZURE_STORAGE_CONNECTION_STRING environment variable, you can
access your Storage account without having to provide authentication each time.
Query the Storage queue
You can use the az storage queue list command to view the Storage queues in your account, as in the
following example:
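az storage queue list --output tsv

The command relies on the AZURE_STORAGE_CONNECTION_STRING environment variable you set earlier; the --output tsv option only affects formatting.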
The output from this command includes a queue named outqueue , which is the queue that was created when
the function ran.
Next, use the az storage message peek command to view the messages in this queue, as in this example:
echo `echo $(az storage message peek --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode`
The string returned should be the same as the message you sent to test the function.
NOTE
The previous example decodes the returned string from base64. This is because the Queue storage bindings write to and
read from Azure Storage as base64 strings.
Maven
Gradle
mvn azure-functions:deploy
Again, you can use cURL to test the deployed function. As before, pass the value AzureFunctions in the body of
the POST request to the URL, as in this example:
curl -w "\n" "https://fabrikam-functions-20190929094703749.azurewebsites.net/api/HttpTrigger-Java?code=zYRohsTwBlZ68YF...." --data AzureFunctions
You can examine the Storage queue message again to verify that the output binding generates a new message
in the queue, as expected.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to continue on with subsequent
quickstarts or with the tutorials, don't clean up the resources created in this quickstart. If you don't plan to
continue, use the following command to delete all resources created in this quickstart:
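az group delete --name <RESOURCE_GROUP>

Replace <RESOURCE_GROUP> with the name of the resource group created for you in the previous quickstart; the command prompts for confirmation before deleting.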
Next steps
You've updated your HTTP-triggered function to write data to a Storage queue. To learn more about developing
Azure Functions with Java, see the Azure Functions Java developer guide and Azure Functions triggers and
bindings. For examples of complete Function projects in Java, see the Java Functions samples.
Next, you should enable Application Insights monitoring for your function app:
Enable Application Insights integration
Connect Azure Functions to Azure Storage using
command line tools
8/2/2022 • 17 minutes to read
In this article, you integrate an Azure Storage queue with the function and storage account you created in the
previous quickstart article. You achieve this integration by using an output binding that writes data from an
HTTP request to a message in the queue. Completing this article incurs no additional costs beyond the few USD
cents of the previous quickstart. To learn more about bindings, see Azure Functions triggers and bindings
concepts.
2. Open the local.settings.json file and locate the value named AzureWebJobsStorage , which is the Storage
account connection string. You use the name AzureWebJobsStorage and the connection string in other
sections of this article.
IMPORTANT
Because the local.settings.json file contains secrets downloaded from Azure, always exclude this file from source control.
The .gitignore file created with a local functions project excludes the file by default.
Now, you can add the storage output binding to your project.
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
Each binding has at least a type, a direction, and a name. In the above example, the first binding is of type
httpTrigger with the direction in . For the in direction, name specifies the name of an input parameter that's
sent to the function when invoked by the trigger.
The second binding in the collection is named res . This http binding is an output binding ( out ) that is used
to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
{
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "res"
        },
        {
            "type": "queue",
            "direction": "out",
            "name": "msg",
            "queueName": "outqueue",
            "connection": "AzureWebJobsStorage"
        }
    ]
}
The second binding in the collection is of type http with the direction out , in which case the special name of
$return indicates that this binding uses the function's return value rather than providing an input parameter.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
The second binding in the collection is named Response . This http binding is an output binding ( out ) that is used
to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
{
    "bindings": [
        {
            "authLevel": "function",
            "type": "httpTrigger",
            "direction": "in",
            "name": "Request",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "Response"
        },
        {
            "type": "queue",
            "direction": "out",
            "name": "msg",
            "queueName": "outqueue",
            "connection": "AzureWebJobsStorage"
        }
    ]
}
In this case, msg is given to the function as an output argument. For a queue type, you must also specify the
name of the queue in queueName and provide the name of the Azure Storage connection (from the
local.settings.json file) in connection .
In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions
depend on whether your app runs in-process (C# class library) or in an isolated process.
In-process
Isolated process
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
The msg parameter is an ICollector<T> type, representing a collection of messages written to an output
binding when the function completes. In this case, the output is a storage queue named outqueue . The
StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting
that contains the storage account connection string and can be applied at the class, method, or parameter level.
In this case, you could omit StorageAccountAttribute because you're already using the default storage account.
The Run method definition must now look like the following code:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In a Java project, the bindings are defined as binding annotations on the function method. The function.json file
is then autogenerated based on these annotations.
Browse to the location of your function code under src/main/java, open the Function.java project file, and add
the following parameter to the run method definition:
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String>
msg
The msg parameter is an OutputBinding<T> type, which represents a collection of strings. These strings are
written as messages to an output binding when the function completes. In this case, the output is a storage
queue named outqueue . The connection string for the Storage account is set by the connection method. You
pass the application setting that contains the Storage account connection string, rather than passing the
connection string itself.
The run method definition must now look like the following example:
@FunctionName("HttpTrigger-Java")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.FUNCTION)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage")
OutputBinding<String> msg, final ExecutionContext context) {
...
}
For more information on the details of bindings, see Azure Functions triggers and bindings concepts and queue
output configuration.
Update HttpExample\__init__.py to match the following code: add the msg parameter to the function definition and call msg.set(name) under the if name: statement:

import logging

import azure.functions as func


# msg is the queue output binding declared in function.json.
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        msg.set(name)
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )

The msg parameter is an instance of the azure.functions.Out class. The set method writes a string message
to the queue. In this case, it's the name passed to the function in the URL query string.
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this
code before the context.res statement.
context.bindings.msg = name;
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));

    if (name) {
        // Add a message to the storage queue,
        // which is the name passed to the function.
        context.bindings.msg = name;
        // Send a "hello" response.
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};
Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add
this code before you set the OK status in the if statement.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
if ($name) {
    # Write the $name value to the queue,
    # which is the name passed to the function.
    $outputMsg = $name
    Push-OutputBinding -name msg -Value $outputMsg

    $status = [HttpStatusCode]::OK
    $body = "Hello $name"
}
else {
    $status = [HttpStatusCode]::BadRequest
    $body = "Please pass a name on the query string or in the request body."
}
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
    // Add a message to the output collection.
    msg.Add(name);
}
With this addition, the end of the method should now look like the following:

    if (!string.IsNullOrEmpty(name))
    {
        // Add a message to the output collection.
        msg.Add(name);
    }

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Now, you can use the new msg parameter to write to the output binding from your function code. Add the
following line of code before the success response to add the value of name to the msg output binding.
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a
queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your run method must now look like the following example:
    // The annotated signature shown earlier is unchanged. Within the body,
    // the name value comes from the query string or request body, as in the template.
    final String query = request.getQueryParameters().get("name");
    final String name = request.getBody().orElse(query);

    if (name == null) {
        return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
            .body("Please pass a name on the query string or in the request body").build();
    } else {
        // Write the name to the message queue.
        msg.setValue(name);
        return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
    }
}

Because the run method signature changed, the generated unit tests also need to mock the new msg parameter before calling run:

@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);

final HttpResponseMessage ret = new Function().run(req, msg, context);
Observe that you don't need to write any code for authentication, getting a queue reference, or writing data. All
these integration tasks are conveniently handled in the Azure Functions runtime and queue output binding.
func start
Toward the end of the output, the following lines must appear:
...

Http Functions:

        HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the
project. In that case, use Ctrl+C to stop the host, go to the project's root folder, and run the previous command
again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a response message that echoes back your query string value. The terminal in
which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl + C and type y to stop the functions host.
TIP
During startup, the host downloads and installs the Storage binding extension and other Microsoft binding extensions.
This installation happens because binding extensions are enabled by default in the host.json file with the following
properties:
{
    "version": "2.0",
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[1.*, 2.0.0)"
    }
}
If you encounter any errors related to binding extensions, check that the above properties are present in host.json.
bash
PowerShell
Azure CLI
export AZURE_STORAGE_CONNECTION_STRING="<MY_CONNECTION_STRING>"
2. (Optional) Use the az storage queue list command to view the Storage queues in your account. The
output from this command must include a queue named outqueue , which was created when the function
wrote its first message to that queue.
3. Use the az storage message get command to read the message from this queue, which should be the
value you supplied when testing the function earlier. The command reads and removes the first message
from the queue.
bash
PowerShell
Azure CLI
echo `echo $(az storage message get --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode`
Because the message body is stored base64 encoded, the message must be decoded before it's displayed.
After you execute az storage message get , the message is removed from the queue. If there was only one
message in outqueue , you won't retrieve a message when you run this command a second time and
instead get an error.
In the local project folder, use the following Maven command to republish your project:
mvn azure-functions:deploy
Verify in Azure
1. As in the previous quickstart, use a browser or CURL to test the redeployed function.
Browser
curl
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter &name=Functions . The browser should display the same output as
when you ran the function locally.
2. Examine the Storage queue again, as described in the previous section, to verify that it contains the new
message written to the queue.
Clean up resources
After you've finished, use the following command to delete the resource group and all its contained resources to
avoid incurring further costs.
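az group delete --name <RESOURCE_GROUP>

Here <RESOURCE_GROUP> stands for the resource group you created in the previous quickstart; the command prompts for confirmation before deleting.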
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. Now you can learn more about
developing Functions from the command line using Core Tools and Azure CLI:
Work with Azure Functions Core Tools
Azure Functions triggers and bindings
Examples of complete Function projects in C#.
Azure Functions C# developer reference
Examples of complete Function projects in JavaScript.
Azure Functions JavaScript developer guide
Examples of complete Function projects in TypeScript.
Azure Functions TypeScript developer guide
Examples of complete Function projects in Python.
Azure Functions Python developer guide
Examples of complete Function projects in PowerShell.
Azure Functions PowerShell developer guide
Debug PowerShell Azure Functions locally
8/2/2022 • 7 minutes to read
PSFunctionApp
| - HttpTriggerFunction
| | - run.ps1
| | - function.json
| - local.settings.json
| - host.json
| - profile.ps1
This function app is similar to the one you get when you complete the PowerShell quickstart.
The function code in run.ps1 looks like the following script:
param($Request)

$name = $Request.Query.Name

if($name) {
    $status = 200
    $body = "Hello $name"
}
else {
    $status = 400
    $body = "Please pass a name on the query string or in the request body."
}
All you need to do is add a call to the Wait-Debugger cmdlet just above the if statement, as follows:
param($Request)

$name = $Request.Query.Name

# Wait here for a debugger to attach before continuing.
Wait-Debugger

if($name) {
    $status = 200
    $body = "Hello $name"
}
# ...
NOTE
Should your project not have the needed configuration files, you are prompted to add them.
NOTE
You need to ensure that PSWorkerInProcConcurrencyUpperBound is set to 1 for a correct debugging experience in
Visual Studio Code. This is the default.
With your function app running, you need a separate PowerShell console to call the HTTP triggered function.
In this case, the PowerShell console is the client. The Invoke-RestMethod is used to trigger the function.
In a PowerShell console, run the following command:
Invoke-RestMethod "http://localhost:7071/api/HttpTrigger?Name=Functions"
You'll notice that a response isn't immediately returned. That's because Wait-Debugger has attached the
debugger and PowerShell execution went into break mode as soon as it could. This is because of the BreakAll
concept, which is explained later. After you press the continue button, the debugger now breaks on the line
right after Wait-Debugger .
At this point, the debugger is attached and you can do all the normal debugger operations. For more
information on using the debugger in Visual Studio Code, see the official documentation.
After you continue and fully invoke your script, you'll notice that:
The PowerShell console that did the Invoke-RestMethod has returned a result
The PowerShell Integrated Console in Visual Studio Code is waiting for a script to be executed
Later when you invoke the same function, the debugger in PowerShell extension breaks right after the
Wait-Debugger .
Open up a console, cd into the directory of your function app, and run the following command:
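func start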
With the function app running and the Wait-Debugger in place, you can attach to the process. You do need two
more PowerShell consoles.
One of the consoles acts as the client. From this, you call Invoke-RestMethod to trigger the function. For example,
you can run the following command:
Invoke-RestMethod "http://localhost:7071/api/HttpTrigger?Name=Functions"
You'll notice that it doesn't return a response, which is a result of the Wait-Debugger . The PowerShell runspace is
now waiting for a debugger to be attached. Let's get that attached.
In the other PowerShell console, run the following command:
Get-PSHostProcessInfo
This cmdlet returns a table that looks like the following output:
Make note of the ProcessId for the item in the table with the ProcessName as dotnet . This process is your
function app.
Next, run the following snippet:
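A sketch of that snippet, where $processId stands for the ProcessId value you noted above:

# Attach to the PowerShell host running inside the Functions (dotnet) process.
Enter-PSHostProcess -Id $processId

# Start debugging the runspace that is waiting in Wait-Debugger.
# The runspace Id of 1 matches the "Runspace1" shown in the output below.
Debug-Runspace -Id 1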
Once started, the debugger breaks and shows something like the following output:
Debugging Runspace: Runspace1
To end the debugging session type the 'Detach' command at the debugger prompt, or type 'Ctrl+C' otherwise.
At /Path/To/PSFunctionApp/HttpTriggerFunction/run.ps1:13 char:1
+ if($name) { ...
+ ~~~~~~~~~~~
[DBG]: [Process:49988]: [Runspace1]: PS /Path/To/PSFunctionApp>>
At this point, you're stopped at a breakpoint in the PowerShell debugger. From here, you can do all of the usual
debug operations, step over, step into, continue, quit, and others. To see the complete set of debug commands
available in the console, run the h or ? commands.
You can also set breakpoints at this level with the Set-PSBreakpoint cmdlet.
Once you continue and fully invoke your script, you'll notice that:
The PowerShell console where you executed Invoke-RestMethod has now returned a result.
The PowerShell console where you executed Debug-Runspace is waiting for a script to be executed.
You can invoke the same function again (using Invoke-RestMethod for example) and the debugger breaks in
right after the Wait-Debugger command.
Should this break happen, run the continue or c command to skip over this breakpoint. You then stop at the
expected breakpoint.
Troubleshooting
If you have difficulties during debugging, you should check for the following:
Check: Run func --version from the terminal. If you get an error that func can't be found, Core Tools (func.exe) may be missing from the local path variable.
Action: Reinstall Core Tools.

Check: In Visual Studio Code, the default terminal needs to have access to func.exe. Make sure you aren't using a default terminal that doesn't have Core Tools installed, such as Windows Subsystem for Linux (WSL).
Action: Set the default shell in Visual Studio Code to either PowerShell 7 (recommended) or Windows PowerShell 5.1.
Next steps
To learn more about developing Functions using PowerShell, see Azure Functions PowerShell developer guide.
Azure Function Event Grid Trigger Local Debugging
8/2/2022 • 2 minutes to read
This article demonstrates how to debug a local function that handles an Azure Event Grid event raised by a
storage account.
Prerequisites
Create or use an existing function app
Create or use an existing storage account. An Event Grid notification subscription can be set on Azure Storage
accounts for BlobStorage , StorageV2 , or Data Lake Storage Gen2.
Download ngrok to allow Azure to call your local function
Once the function is created, open the code file and copy the URL commented out at the top of the file. This
location is used when configuring the Event Grid trigger.
Then, set a breakpoint on the line that begins with log.LogInformation .
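Next, start ngrok so that Azure can reach the local Functions host. A typical invocation, assuming the default Core Tools port of 7071 (the exact spelling of the host-header flag varies by ngrok version):

ngrok http --host-header=localhost 7071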
Once the utility is running, the command window should look similar to the following screenshot:
Copy the HTTPS URL generated when ngrok is run. This value is used when configuring the event grid event
endpoint.
4. Once the endpoint type is configured, click on Select an endpoint to configure the endpoint value.
The Subscriber Endpoint value is made up from three different values. The prefix is the HTTPS URL
generated by ngrok. The remainder of the URL comes from the localhost URL copied earlier in the how to
guide, with the function name added at the end. Starting with the localhost URL, the ngrok URL replaces
http://localhost:7071 and the function name replaces {functionname} .
5. The following screenshot shows an example of how the final URL should look when using an Event Grid
trigger type.
6. Once you've entered the appropriate value, click Confirm Selection .
IMPORTANT
Every time you start ngrok, the HTTPS URL is regenerated and the value changes. Therefore you must create a new Event
Subscription each time you expose your function to Azure via ngrok.
Upload a file
Now you can upload a file to your storage account to trigger an Event Grid event for your local function to
handle.
Open Storage Explorer and connect it to your storage account.
Expand Blob Containers
Right-click and select Create Blob Container .
Name the container samples-workitems
Select the samples-workitems container
Click the Upload button
Click Upload Files
Select a file and upload it to the blob container
Clean up resources
To clean up the resources created in this article, delete the test container in your storage account.
Next steps
Automate resizing uploaded images using Event Grid
Event Grid trigger for Azure Functions
Use dependency injection in .NET Azure Functions
8/2/2022 • 8 minutes to read
Azure Functions supports the dependency injection (DI) software design pattern, which is a technique to achieve
Inversion of Control (IoC) between classes and their dependencies.
Dependency injection in Azure Functions is built on the .NET Core Dependency Injection features.
Familiarity with .NET Core dependency injection is recommended. There are differences in how you
override dependencies and how configuration values are read with Azure Functions on the Consumption
plan.
Support for dependency injection begins with Azure Functions 2.x.
Dependency injection patterns differ depending on whether your C# functions run in-process or out-of-
process.
IMPORTANT
The guidance in this article applies only to C# class library functions, which run in-process with the runtime. This custom
dependency injection model doesn't apply to .NET isolated functions, which lets you run .NET 5.0 functions out-of-process.
The .NET isolated process model relies on regular ASP.NET Core dependency injection patterns. To learn more, see
Dependency injection in the .NET isolated process guide.
Prerequisites
Before you can use dependency injection, you must install the following NuGet packages:
Microsoft.Azure.Functions.Extensions
Microsoft.NET.Sdk.Functions package version 1.0.28 or later
Microsoft.Extensions.DependencyInjection (currently, only version 2.x or later supported)
Register services
To register services, create a method to configure and add components to an IFunctionsHostBuilder instance.
The Azure Functions host creates an instance of IFunctionsHostBuilder and passes it directly into your method.
To register the method, add the FunctionsStartup assembly attribute that specifies the type name used during
startup.
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyNamespace.Startup))]

namespace MyNamespace
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            builder.Services.AddHttpClient();

            builder.Services.AddSingleton<IMyService>((s) => {
                return new MyService();
            });

            builder.Services.AddSingleton<ILoggerProvider, MyLoggerProvider>();
        }
    }
}
This example uses the Microsoft.Extensions.Http package required to register an HttpClient at startup.
Caveats
A series of registration steps run before and after the runtime processes the startup class. Therefore, keep in
mind the following items:
The startup class is meant for only setup and registration. Avoid using services registered at startup
during the startup process. For instance, don't try to log a message in a logger that is being registered
during startup. This point of the registration process is too early for your services to be available for use.
After the Configure method is run, the Functions runtime continues to register additional dependencies,
which can affect how your services operate.
The dependency injection container only holds explicitly registered types. The only services available as
injectable types are the ones set up in the Configure method. As a result, Functions-specific types like
BindingContext and ExecutionContext aren't available during setup or as injectable types.
namespace MyNamespace
{
    public class MyHttpTrigger
    {
        private readonly HttpClient _client;
        private readonly IMyService _service;

        // Dependencies arrive through constructor injection; IHttpClientFactory
        // follows from the AddHttpClient() registration shown above.
        public MyHttpTrigger(IHttpClientFactory httpClientFactory, IMyService service)
        {
            this._client = httpClientFactory.CreateClient();
            this._service = service;
        }

        [FunctionName("MyHttpTrigger")]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            var response = await _client.GetAsync("https://microsoft.com");
            var message = _service.GetMessage();
            // Return a result that uses the injected services.
            return new OkObjectResult(message);
        }
    }
}
Service lifetimes
Azure Functions apps provide the same service lifetimes as ASP.NET Dependency Injection. For a Functions app,
the different service lifetimes behave as follows:
Transient : Transient services are created upon each resolution of the service.
Scoped : The scoped service lifetime matches a function execution lifetime. Scoped services are created once
per function execution. Later requests for that service during the execution reuse the existing service
instance.
Singleton : The singleton service lifetime matches the host lifetime and is reused across function executions
on that instance. Singleton lifetime services are recommended for connections and clients, for example
DocumentClient or HttpClient instances.
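For example, each lifetime corresponds to a different registration call in the Startup.Configure method shown earlier; the service and implementation names here are illustrative:

// Created on every resolution of the service.
builder.Services.AddTransient<IParser, Parser>();

// Created once per function execution and reused within it.
builder.Services.AddScoped<IRequestContext, RequestContext>();

// Created once and reused across executions on the instance.
builder.Services.AddSingleton<IMyService, MyService>();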
Logging services
If you need your own logging provider, register a custom type as an instance of ILoggerProvider , which is
available through the Microsoft.Extensions.Logging.Abstractions NuGet package.
Application Insights is added by Azure Functions automatically.
WARNING
Don't add AddApplicationInsightsTelemetry() to the services collection, which registers services that conflict with
services provided by the environment.
Don't register your own TelemetryConfiguration or TelemetryClient if you are using the built-in Application
Insights functionality. If you need to configure your own TelemetryClient instance, create one via the injected
TelemetryConfiguration as shown in Log custom telemetry in C# functions.
namespace MyNamespace
{
    public class HttpTrigger
    {
        private readonly ILogger<HttpTrigger> _log;

        // The typed logger is supplied through constructor injection.
        public HttpTrigger(ILogger<HttpTrigger> log)
        {
            _log = log;
        }

        [FunctionName("HttpTrigger")]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req)
        {
            _log.LogInformation("C# HTTP trigger function processed a request.");
            // ...
        }
    }
}

You can control the log level for this logger category in host.json:
{
    "version": "2.0",
    "logging": {
        "applicationInsights": {
            "samplingSettings": {
                "isEnabled": true,
                "excludedTypes": "Request"
            }
        },
        "logLevel": {
            "MyNamespace.HttpTrigger": "Information"
        }
    }
}
For more information about log levels, see Configure log levels.
The Functions host registers many services. The following services are safe to take as a dependency in your application:

Service type: Microsoft.Extensions.Configuration.IConfiguration
Lifetime: Singleton
Description: Runtime configuration

Service type: Microsoft.Azure.WebJobs.Host.Executors.IHostIdProvider
Lifetime: Singleton
Description: Responsible for providing the ID of the host instance
If there are other services you want to take a dependency on, create an issue and propose them on GitHub.
Overriding host services
Overriding services provided by the host is currently not supported. If there are services you want to override,
create an issue and propose them on GitHub.
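Working with options and settings
Values defined in app settings can be bound to a custom options class. A minimal sketch of such a class, inferred from the setting names used below:

public class MyOptions
{
    public string MyCustomSetting { get; set; }
}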
And a local.settings.json file that might structure the custom setting as follows:
{
    "IsEncrypted": false,
    "Values": {
        "MyOptions:MyCustomSetting": "Foobar"
    }
}
From inside the Startup.Configure method, you can extract values from the IConfiguration instance into your
custom type using the following code:
builder.Services.AddOptions<MyOptions>()
    .Configure<IConfiguration>((settings, configuration) =>
    {
        configuration.GetSection("MyOptions").Bind(settings);
    });
Calling Bind copies values that have matching property names from the configuration into the custom
instance. The options instance is now available in the IoC container to inject into a function.
The options object is injected into the function as an instance of the generic IOptions interface. Use the Value
property to access the values found in your configuration.
using System;
using Microsoft.Extensions.Options;
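// A sketch of a function class that receives the bound options through
// constructor injection; the class and field names are illustrative.
public class HttpTrigger
{
    private readonly MyOptions _settings;

    public HttpTrigger(IOptions<MyOptions> options)
    {
        _settings = options.Value;
    }
}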
Refer to Options pattern in ASP.NET Core for more details regarding working with options.
Then use the dotnet user-secrets set command to create or update secrets.
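For example, a sketch with an illustrative secret name:

dotnet user-secrets set "MySecret" "my secret value"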
To access user secrets values in your function app code, use IConfiguration or IOptions .
To specify additional configuration sources, override the ConfigureAppConfiguration method in your function
app's StartUp class.
The following sample adds configuration values from a base and an optional environment-specific app settings
files.
using System.IO;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
[assembly: FunctionsStartup(typeof(MyNamespace.Startup))]
namespace MyNamespace
{
public class Startup : FunctionsStartup
{
public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
{
FunctionsHostBuilderContext context = builder.GetContext();
builder.ConfigurationBuilder
.AddJsonFile(Path.Combine(context.ApplicationRootPath, "appsettings.json"), optional: true,
reloadOnChange: false)
.AddJsonFile(Path.Combine(context.ApplicationRootPath, $"appsettings.
{context.EnvironmentName}.json"), optional: true, reloadOnChange: false)
.AddEnvironmentVariables();
}
}
}
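To make sure these settings files are published with your function app, add entries like the following to your project (.csproj) file so they're copied to the output directory: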
<None Update="appsettings.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
<None Update="appsettings.Development.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
<CopyToPublishDirectory>Never</CopyToPublishDirectory>
</None>
IMPORTANT
For function apps running in the Consumption or Premium plans, modifications to configuration values used in triggers can cause scaling errors. Any changes to these properties by the FunctionsStartup class result in a function app startup error.
Next steps
For more information, see the following resources:
How to monitor your function app
Best practices for functions
Manage connections in Azure Functions
Functions in a function app share resources. Among those shared resources are connections: HTTP connections,
database connections, and connections to services such as Azure Storage. When many functions are running
concurrently in a Consumption plan, it's possible to run out of available connections. This article explains how to
code your functions to avoid using more connections than they need.
NOTE
Connection limits described in this article apply only when running in a Consumption plan. However, the techniques
described here may be beneficial when running on any plan.
Connection limit
The number of available connections in a Consumption plan is limited partly because a function app in this plan
runs in a sandbox environment. One of the restrictions that the sandbox imposes on your code is a limit on the
number of outbound connections, which is currently 600 active (1,200 total) connections per instance. When
you reach this limit, the functions runtime writes the following message to the logs:
Host thresholds exceeded: Connections. For more information, see the Functions service limits.
This limit is per instance. When the scale controller adds function app instances to handle more requests, each
instance has an independent connection limit. That means there's no global connection limit, and you can have many more than 600 active connections across all active instances.
When troubleshooting, make sure that you have enabled Application Insights for your function app. Application
Insights lets you view metrics for your function apps like executions. For more information, see View telemetry
in Application Insights.
Static clients
To avoid holding more connections than necessary, reuse client instances rather than creating new ones with
each function invocation. We recommend reusing client connections for any language that you might write your
function in. For example, .NET clients like the HttpClient, DocumentClient, and Azure Storage clients can manage
connections if you use a single, static client.
Here are some guidelines to follow when you're using a service-specific client in an Azure Functions application:
Do not create a new client with every function invocation.
Do create a single, static client that every function invocation can use.
Consider creating a single, static client in a shared helper class if different functions use the same service.
A common question about HttpClient in .NET is "Should I dispose of my client?" In general, you dispose of
objects that implement IDisposable when you're done using them. But you don't dispose of a static client
because you aren't done using it when the function ends. You want the static client to live for the duration of
your application.
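As a sketch, a single static HttpClient shared across invocations might look like the following (the function name, schedule, and endpoint are illustrative):
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class PeriodicHealthCheck
{
    // Single, static HttpClient reused across all invocations.
    private static readonly HttpClient _client = new HttpClient();

    [FunctionName("PeriodicHealthCheck")]
    public static async Task Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        // Reuse the shared client rather than creating one per invocation.
        var response = await _client.GetStringAsync("https://example.com/healthcheck");
        log.LogInformation(response);
    }
}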
Azure Cosmos DB clients
C#
JavaScript
CosmosClient connects to an Azure Cosmos DB instance. The Azure Cosmos DB documentation recommends
that you use a singleton Azure Cosmos DB client for the lifetime of your application. The following example
shows one pattern for doing that in a function:
#r "Microsoft.Azure.Cosmos"
using Microsoft.Azure.Cosmos;
// Rest of function
}
Also, create a file named "function.proj" for your trigger and add the following content:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Azure.Cosmos" Version="3.23.0" />
</ItemGroup>
</Project>
SqlClient connections
Your function code can use the .NET Framework Data Provider for SQL Server (SqlClient) to make connections
to a SQL relational database. This is also the underlying provider for data frameworks that rely on ADO.NET,
such as Entity Framework. Unlike HttpClient and DocumentClient connections, ADO.NET implements connection
pooling by default. But because you can still run out of connections, you should optimize connections to the
database. For more information, see SQL Server Connection Pooling (ADO.NET).
TIP
Some data frameworks, such as Entity Framework, typically get connection strings from the ConnectionStrings section
of a configuration file. In this case, you must explicitly add SQL database connection strings to the Connection strings
collection of your function app settings and in the local.settings.json file in your local project. If you're creating an instance
of SqlConnection in your function code, you should store the connection string value in Application settings with your
other connections.
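A short sketch of per-invocation SqlConnection use, assuming the connection string is stored in an app setting named sqldb_connection (the query is illustrative):
using System.Data.SqlClient;

var connectionString = Environment.GetEnvironmentVariable("sqldb_connection");
using (var connection = new SqlConnection(connectionString))
{
    await connection.OpenAsync();
    // Disposing the connection returns it to the ADO.NET pool rather than closing it.
    using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.ToDo", connection))
    {
        var count = (int)await command.ExecuteScalarAsync();
    }
}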
Next steps
For more information about why we recommend static clients, see Improper instantiation antipattern.
For more Azure Functions performance tips, see Optimize the performance and reliability of Azure Functions.
Azure Functions error handling and retries
Handling errors in Azure Functions is important to avoid lost data and missed events, and to monitor the health of your application. It's also important to understand the retry behaviors of event-based triggers.
This article describes general strategies for error handling and the available retry strategies.
IMPORTANT
The retry policy support in the runtime for triggers other than Timer and Event Hubs is being removed after this feature
becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be
removed in October 2022.
Handling errors
Errors raised in Azure Functions can come from any of the following origins:
Use of built-in Azure Functions triggers and bindings.
Calls to APIs of underlying Azure services.
Calls to REST endpoints.
Calls to client libraries, packages, or third-party APIs.
Good error handling practices are important to avoid loss of data or missed messages. This section describes
some recommended error handling practices with links to more information.
Enable Application Insights
Azure Functions integrates with Application Insights to collect error data, performance data, and runtime logs.
You should use Application Insights to discover and better understand errors occurring in your function
executions. To learn more, see Monitor Azure Functions.
Use structured error handling
Capturing and logging errors is critical to monitoring the health of your application. The top-most level of any
function code should include a try/catch block. In the catch block, you can capture and log errors. For
information about what errors might be raised by bindings, see Binding error codes.
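A minimal sketch of this pattern in a C# queue-triggered function (the function and queue names are illustrative):
[FunctionName("ProcessOrder")]
public static void Run([QueueTrigger("orders")] string orderItem, ILogger log)
{
    try
    {
        // Function business logic goes here.
    }
    catch (Exception ex)
    {
        // Capture and log the error, then rethrow so the trigger's
        // retry behavior can take over if one is configured.
        log.LogError(ex, "Failed to process queue item: {Item}", orderItem);
        throw;
    }
}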
Plan your retry strategy
Several Functions bindings extensions provide built-in support for retries. In addition, the runtime lets you
define retry policies for Timer and Event Hubs triggered functions. To learn more, see Retries. For triggers that
don't provide retry behaviors, you may want to implement your own retry scheme.
Design for idempotency
The occurrence of errors when processing data can be a problem for your functions, especially when processing
messages. You need to consider what happens when the error occurs and how to avoid duplicate processing. To
learn more, see Designing Azure Functions for identical input.
Retries
There are two kinds of retries available for your functions: built-in retry behaviors of individual trigger
extensions and retry policies. The following table indicates which triggers support retries and where the retry
behavior is configured. It also links to more information about errors coming from the underlying services.
Retry policies
Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer and Event Hubs triggers that are enforced by the Functions runtime. The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.
A retry policy is evaluated when a Timer or Event Hubs triggered function raises an uncaught exception. As a
best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a
retry. Event Hubs checkpoints won't be written until the retry policy for the execution has completed. Because of
this behavior, progress on the specific partition is paused until the current batch has completed.
Retry strategies
There are two retry strategies supported by policy that you can configure:
Fixed delay
Exponential backoff
{
"disabled": false,
"bindings": [
{
....
}
],
"retry": {
"strategy": "fixedDelay",
"maxRetryCount": 4,
"delayInterval": "00:00:10"
}
}
import logging
import azure.functions as func

def main(mytimer: func.TimerRequest, context: func.Context) -> None:
    if context.retry_context.retry_count == context.retry_context.max_retry_count:
        logging.warning(
            f"Max retries of {context.retry_context.max_retry_count} for "
            f"function {context.function_name} has been reached")
Fixed delay
Exponential backoff
@FunctionName("TimerTriggerJava1")
@FixedDelayRetry(maxRetryCount = 4, delayInterval = "00:00:10")
public void run(
@TimerTrigger(name = "timerInfo", schedule = "0 */5 * * * *") String timerInfo,
final ExecutionContext context
) {
context.getLogger().info("Java Timer trigger function executed at: " + LocalDateTime.now());
}
Next steps
Azure Functions triggers and bindings concepts
Best practices for reliable Azure Functions
Manually run a non HTTP-triggered function
This article demonstrates how to manually run a non HTTP-triggered function via a specially formatted HTTP request.
In some contexts, you may need to run an Azure function "on demand" that is triggered indirectly. Examples of indirect triggers include functions on a schedule or functions that run as the result of another resource's action.
Postman is used in the following example, but you can use cURL, Fiddler, or any similar tool to send HTTP requests.
Host name: The function app's public location, made up of the function app's name plus azurewebsites.net or your custom domain.
Folder path: To access non HTTP-triggered functions via an HTTP request, send the request through the admin/functions path.
Function name: The name of the function you want to run.
You use this request location in Postman along with the function's master key in the request to Azure to run the
function.
NOTE
When running locally, the function's master key is not required. You can directly call the function omitting the
x-functions-key header.
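For example, a cURL sketch of the request (the app and function names are hypothetical):
curl -X POST -H "x-functions-key: <master-key>" -H "Content-Type: application/json" -d "{}" https://myfunctionapp.azurewebsites.net/admin/functions/MyTimerFunction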
3. After copying the _master key, select Code + Test , and then select Logs . You'll see messages from the
function logged here when you manually run the function from Postman.
Caution
Due to the elevated permissions in your function app granted by the master key, you should not share this key
with third parties or distribute it in an application. The key should only be sent to an HTTPS endpoint.
NOTE
If you don't want to pass data into the function, you must still pass an empty dictionary {} as the body of the
POST request.
8. Select Send .
Postman then reports a status of 202 Accepted .
9. Next, return to your function in the Azure portal. Review the logs and you'll see messages coming from
the manual call to the function.
Next steps
Strategies for testing your code in Azure Functions
Azure Function Event Grid Trigger Local Debugging
Bring dependencies or third-party libraries to Azure Functions
In this article, you learn how to bring third-party dependencies into your function apps. Examples of third-party dependencies are JSON files, binary files, and machine learning models.
In this article, you learn how to:
Bring in dependencies via the Functions code project
Bring in dependencies by mounting an Azure Files share
<project_root>/
| - my_first_function/
| | - __init__.py
| | - function.json
| | - example.py
| - dependencies/
| | - dependency1
| - .funcignore
| - host.json
| - local.settings.json
By putting the dependencies in a folder inside the function app project directory, the dependencies folder is deployed together with the code. As a result, your function code can access the dependencies in the cloud via the file system API.
Accessing the dependencies in your code
Here's an example that accesses and executes the ffmpeg dependency placed in the <project_root>/ffmpeg_lib directory.
import logging
import os
import subprocess

import azure.functions as func

FFMPEG_RELATIVE_PATH = "../ffmpeg_lib/ffmpeg"

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Resolve the dependency path relative to this function's folder.
    ffmpeg_path = os.path.join(os.path.dirname(__file__), FFMPEG_RELATIVE_PATH)
    command = req.params.get('command')
    # If no command specified, set the command to help
    if not command:
        command = "-h"
    try:
        byte_output = subprocess.check_output([ffmpeg_path, command])
        return func.HttpResponse(byte_output.decode('UTF-8').rstrip(), status_code=200)
    except Exception as e:
        return func.HttpResponse(
            "Unexpected exception happened when executing ffmpeg. Error message: " + str(e),
            status_code=500)
NOTE
You may need to use chmod to grant execute rights to the ffmpeg binary in a Linux environment.
One of the simplest ways to bring in dependencies is to put the files or artifacts together with the function app code in the functions project directory structure. Here's an example of the directory structure in a Java functions project:
<project_root>/
| - src/
| | - main/java/com/function
| | | - Function.java
| | - test/java/com/function
| - artifacts/
| | - dependency1
| - host.json
| - local.settings.json
| - pom.xml
For Java specifically, you need to include the artifacts in the build/target folder when copying resources. Here's an example of how to do it in Maven:
...
<execution>
<id>copy-resources</id>
<phase>package</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<overwrite>true</overwrite>
<outputDirectory>${stagingDirectory}</outputDirectory>
<resources>
<resource>
<directory>${project.basedir}</directory>
<includes>
<include>host.json</include>
<include>local.settings.json</include>
<include>artifacts/**</include>
</includes>
</resource>
</resources>
</configuration>
</execution>
...
By putting the dependencies in a folder inside the function app project directory, the dependencies folder is deployed together with the code. As a result, your function code can access the dependencies in the cloud via the file system API.
Accessing the dependencies in your code
Here's an example that accesses and executes the ffmpeg dependency placed in the <project_root>/ffmpeg_lib directory.
import java.io.IOException;
import java.util.Optional;

import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

public class Function {
    final static String BASE_PATH = "BASE_PATH";
    final static String FFMPEG_PATH = "/artifacts/ffmpeg/ffmpeg.exe";
    final static String HELP_FLAG = "-h";
    final static String COMMAND_QUERY = "command";

    @FunctionName("HttpExample")
    public HttpResponseMessage run(
        @HttpTrigger(
            name = "req",
            methods = {HttpMethod.GET, HttpMethod.POST},
            authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<String>> request,
        final ExecutionContext context) throws IOException {
        context.getLogger().info("Java HTTP trigger processed a request.");

        // Default to the help flag when no command is passed in the query string.
        String flags = request.getQueryParameters().getOrDefault(COMMAND_QUERY, HELP_FLAG);

        Runtime rt = Runtime.getRuntime();
        String[] commands = { System.getenv(BASE_PATH) + FFMPEG_PATH, flags };
        Process proc = rt.exec(commands);
        // ... read the process output and build the HTTP response here.
        return request.createResponseBuilder(HttpStatus.OK).build();
    }
}
NOTE
To get this snippet of code to work in Azure, you need to specify a custom application setting of BASE_PATH with a value of /home/site/wwwroot.
More commands to modify or delete the file share configuration can be found here.
Uploading the dependencies to Azure Files
One option for uploading your dependencies into Azure Files is through the Azure portal. Refer to this guide for instructions on uploading dependencies using the portal. Other options for uploading your dependencies into Azure Files are through the Azure CLI and PowerShell.
Accessing the dependencies in your code
After your dependencies are uploaded in the file share, you can access the dependencies from your code. The
mounted share is available at the specified mount-path, such as /path/to/mount . You can access the target
directory by using file system APIs.
The following example shows HTTP trigger code that accesses the ffmpeg library, which is stored in a mounted
file share.
import logging
import os
import subprocess

import azure.functions as func

FILE_SHARE_MOUNT_PATH = os.environ['FILE_SHARE_MOUNT_PATH']
FFMPEG = "ffmpeg"

def main(req: func.HttpRequest) -> func.HttpResponse:
    command = req.params.get('command')
    # If no command specified, set the command to help
    if not command:
        command = "-h"
    try:
        # os.path.join (not str.join) builds the path to ffmpeg on the mounted share.
        byte_output = subprocess.check_output(
            [os.path.join(FILE_SHARE_MOUNT_PATH, FFMPEG), command])
        return func.HttpResponse(byte_output.decode('UTF-8').rstrip(), status_code=200)
    except Exception as e:
        return func.HttpResponse(
            "Unexpected exception happened when executing ffmpeg. Error message: " + str(e),
            status_code=500)
When you deploy this code to a function app in Azure, you need to create an app setting with a key name of
FILE_SHARE_MOUNT_PATH and value of the mounted file share path, which for this example is /azure-files-share .
For local debugging, populate FILE_SHARE_MOUNT_PATH with the file path where your dependencies are stored on your local machine. Here's an example that sets FILE_SHARE_MOUNT_PATH using local.settings.json:
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "",
"FUNCTIONS_WORKER_RUNTIME": "python",
"FILE_SHARE_MOUNT_PATH" : "PATH_TO_LOCAL_FFMPEG_DIR"
}
}
Next steps
Azure Functions Python developer guide
Azure Functions Java developer guide
Azure Functions developer reference
Develop Python worker extensions for Azure
Functions
Azure Functions lets you integrate custom behaviors as part of Python function execution. This feature enables
you to create business logic that customers can easily use in their own function apps. To learn more, see the
Python developer reference.
In this tutorial, you'll learn how to:
Create an application-level Python worker extension for Azure Functions.
Consume your extension in an app the way your customers do.
Package and publish an extension for consumption.
Prerequisites
Before you start, you must meet these requirements:
Python 3.6.x or above. To check the full list of supported Python versions in Azure Functions, see the
Python developer guide.
The Azure Functions Core Tools, version 3.0.3568 or later.
Visual Studio Code installed on one of the supported platforms.
<python_worker_extension_root>/
| - .venv/
| - python_worker_extension_timer/
| | - __init__.py
| - setup.py
| - readme.md
In the following template, you should change the author, author_email, install_requires, license, packages, and url fields as needed.
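A minimal setup.py sketch with placeholder values for those fields (the package metadata is illustrative):
from setuptools import find_packages, setup

setup(
    name="python-worker-extension-timer",
    version="1.0.0",
    author="Your Name",
    author_email="you@example.com",
    url="https://github.com/<your-repo>",
    license="MIT",
    packages=find_packages(),
    # The extension interface lives in the azure-functions package.
    install_requires=["azure-functions >= 1.7.0"],
)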
The following is the complete TimerExtension implementation:
import typing
from logging import Logger
from time import time

from azure.functions import AppExtensionBase, Context, HttpResponse

class TimerExtension(AppExtensionBase):
    """A Python worker extension to record elapsed time in a function invocation."""

    @classmethod
    def init(cls):
        # This records the starttime of each function
        cls.start_timestamps: typing.Dict[str, float] = {}

    @classmethod
    def configure(cls, *args, append_to_http_response: bool = False, **kwargs):
        # Customer can use TimerExtension.configure(append_to_http_response=)
        # to decide whether the elapsed time should be shown in HTTP response
        cls.append_to_http_response = append_to_http_response

    @classmethod
    def pre_invocation_app_level(
        cls, logger: Logger, context: Context,
        func_args: typing.Dict[str, object],
        *args, **kwargs
    ) -> None:
        logger.info(f'Recording start time of {context.function_name}')
        cls.start_timestamps[context.invocation_id] = time()

    @classmethod
    def post_invocation_app_level(
        cls, logger: Logger, context: Context,
        func_args: typing.Dict[str, object],
        func_ret: typing.Optional[object],
        *args, **kwargs
    ) -> None:
        if context.invocation_id in cls.start_timestamps:
            # Get the start_time of the invocation
            start_time: float = cls.start_timestamps.pop(context.invocation_id)
            end_time: float = time()
            # Calculate the elapsed time
            elapsed_time = end_time - start_time
            logger.info(f'Time taken to execute {context.function_name} is {elapsed_time} sec')
            # Append the elapsed time to the end of HTTP response
            # if the append_to_http_response is set to True
            if cls.append_to_http_response and isinstance(func_ret, HttpResponse):
                func_ret._HttpResponse__body += f' (TimeElapsed: {elapsed_time} sec)'.encode()
This code inherits from AppExtensionBase so that the extension applies to every function in the app. You could
have also implemented the extension on a function-level scope by inheriting from FuncExtensionBase.
The init method is a class method that's called by the worker when the extension class is imported. You can do
initialization actions here for the extension. In this case, a hash map is initialized for recording the invocation
start time for each function.
The configure method is customer-facing. In your readme file, you can tell your customers when they need to
call Extension.configure() . The readme should also document the extension capabilities, possible configuration,
and usage of your extension. In this example, customers can choose whether the elapsed time is reported in the
HttpResponse .
The pre_invocation_app_level method is called by the Python worker before the function runs. It provides the
information from the function, such as function context and arguments. In this example, the extension logs a
message and records the start time of an invocation based on its invocation_id.
Similarly, the post_invocation_app_level is called after function execution. This example calculates the elapsed
time based on the start time and current time. It also overwrites the return value of the HTTP response.
3. Use the following command to create a new HTTP trigger function that allows anonymous access:
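A Core Tools sketch (the function name matches the HttpTrigger endpoint used later in this article):
func new --name HttpTrigger --template "HTTP trigger" --authlevel "anonymous"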
Then activate your virtual environment:
Linux
Windows
source .venv/bin/activate
2. Install the extension from your local file path, in editable mode, as follows:
pip install -e <PYTHON_WORKER_EXTENSION_ROOT>
In this example, replace <PYTHON_WORKER_EXTENSION_ROOT> with the file location of your extension project.
When a customer uses your extension, they'll instead add your extension package location to the
requirements.txt file, as in the following examples:
PyPI
GitHub
# requirements.txt
python_worker_extension_timer==1.0.0
3. Open the local.settings.json project file and add the following field to Values :
"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"
When running in Azure, you instead add PYTHON_ENABLE_WORKER_EXTENSIONS=1 to the app settings in the
function app.
4. Add the following two lines before the main function in __init__.py:
from python_worker_extension_timer import TimerExtension
TimerExtension.configure(append_to_http_response=True)
This code imports the TimerExtension module and sets the append_to_http_response configuration value.
Verify the extension
1. From your app project root folder, start the function host using func host start --verbose . You should see the local endpoint of your function in the output as http://localhost:7071/api/HttpTrigger .
2. In the browser, send a GET request to http://localhost:7071/api/HttpTrigger . You should see a response like the following, with the TimeElapsed data for the request appended.
This HTTP triggered function executed successfully. Pass a name in the query string or in
the request body for a personalized response. (TimeElapsed: 0.0009996891021728516 sec)
PyPI
GitHub
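A typical publish flow to PyPI, sketched with the standard build and twine tools:
python -m pip install --upgrade build twine
python -m build
twine upload dist/*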
You may need to provide your PyPI account credentials during upload.
After these steps, customers can use your extension by including your package name in their requirements.txt.
For more information, see the official Python packaging tutorial.
Examples
You can view the completed sample extension project from this article in the python_worker_extension_timer sample repository.
OpenCensus integration is an open-source project that uses the extension interface to integrate telemetry
tracing in Azure Functions Python apps. See the opencensus-python-extensions-azure repository to
review the implementation of this Python worker extension.
Next steps
For more information about Azure Functions Python development, see the following resources:
Azure Functions Python developer guide
Best practices for Azure Functions
Azure Functions developer reference
Continuous deployment for Azure Functions
You can use Azure Functions to deploy your code continuously by using source control integration. Source
control integration enables a workflow in which a code update triggers deployment to Azure. If you're new to
Azure Functions, get started by reviewing the Azure Functions overview.
Continuous deployment is a good option for projects where you integrate multiple and frequent contributions.
When you use continuous deployment, you maintain a single source of truth for your code, which allows teams
to easily collaborate. You can configure continuous deployment in Azure Functions from the following source
code locations:
Azure Repos
GitHub
Bitbucket
The unit of deployment for functions in Azure is the function app. All functions in a function app are deployed at
the same time. After you enable continuous deployment, access to function code in the Azure portal is
configured as read-only because the source of truth is set to be elsewhere.
NOTE
Continuous deployment is not yet supported for Linux apps running on a Consumption plan.
5. Review all details, and then select Finish to complete your deployment configuration.
When the process is finished, all code from the specified source is deployed to your app. At that point, changes
in the deployment source trigger a deployment of those changes to your function app in Azure.
NOTE
After you configure continuous integration, you can no longer edit your source files in the Functions portal. If you
originally published your code from your local computer, you may need to change the WEBSITE_RUN_FROM_PACKAGE
setting in your function app to a value of 0 .
Next steps
Best practices for Azure Functions
Azure Functions deployment slots
Azure Functions deployment slots allow your function app to run different instances, called "slots". Slots are different environments exposed via a publicly available endpoint. One app instance is always mapped to the production slot, and you can swap instances assigned to a slot on demand. Function apps running under the App Service plan may have multiple slots, while under the Consumption plan only one slot is allowed.
The following considerations reflect how functions are affected by swapping slots:
Traffic redirection is seamless; no requests are dropped because of a swap. This seamless behavior is a result of the next function triggers being routed to the swapped slot.
Currently executing functions are terminated during the swap. Review Improve the performance and reliability of Azure Functions to learn how to write stateless and defensive functions.
Swap operations
During a swap, one slot is considered the source and the other the target. The source slot has the instance of the
application that is applied to the target slot. The following steps ensure the target slot doesn't experience
downtime during a swap:
1. Apply settings: Settings from the target slot are applied to all instances of the source slot. For example,
the production settings are applied to the staging instance. The applied settings include the following
categories:
Slot-specific app settings and connection strings (if applicable)
Continuous deployment settings (if enabled)
App Service authentication settings (if enabled)
2. Wait for restarts and availability: The swap waits for every instance in the source slot to complete its
restart and to be available for requests. If any instance fails to restart, the swap operation reverts all
changes to the source slot and stops the operation.
3. Update routing: If all instances on the source slot are warmed up successfully, the two slots complete
the swap by switching routing rules. After this step, the target slot (for example, the production slot) has
the app that's previously warmed up in the source slot.
4. Repeat operation: Now that the source slot has the pre-swap app previously in the target slot, complete
the same operation by applying all settings and restarting the instances for the source slot.
Keep in mind the following points:
At any point of the swap operation, initialization of the swapped apps happens on the source slot. The
target slot remains online while the source slot is prepared, whether the swap succeeds or fails.
To swap a staging slot with the production slot, make sure that the production slot is always the target
slot. This way, the swap operation doesn't affect your production app.
Settings related to event sources and bindings must be configured as deployment slot settings before
you start a swap. Marking them as "sticky" ahead of time ensures events and outputs are directed to the
proper instance.
Manage settings
Some configuration settings are slot-specific. The following lists detail which settings change when you swap
slots, and which remain the same.
Slot-specific settings :
Publishing endpoints
Custom domain names
Non-public certificates and TLS/SSL settings
Scale settings
IP restrictions
Always On
Diagnostic settings
Cross-origin resource sharing (CORS)
Non slot-specific settings :
General settings, such as framework version, 32/64-bit, web sockets
App settings (can be configured to stick to a slot)
Connection strings (can be configured to stick to a slot)
Handler mappings
Public certificates
Hybrid connections *
Virtual network integration *
Service endpoints *
Azure Content Delivery Network *
Features marked with an asterisk (*) are planned to be unswapped.
NOTE
Certain app settings that apply to unswapped settings are also not swapped. For example, since diagnostic settings are
not swapped, related app settings like WEBSITE_HTTPLOGGING_RETENTION_DAYS and
DIAGNOSTICS_AZUREBLOBRETENTIONDAYS are also not swapped, even if they don't show up as slot settings.
Create a deployment setting
You can mark settings as a deployment setting, which makes it "sticky". A sticky setting doesn't swap with the
app instance.
If you create a deployment setting in one slot, make sure to create the same setting with a unique value in any
other slot that is involved in a swap. This way, while a setting's value doesn't change, the setting names remain
consistent among slots. This name consistency ensures your code doesn't try to access a setting that is defined
in one slot but not another.
Use the following steps to create a deployment setting:
1. Navigate to Deployment slots in the function app, and then select the slot name.
2. Select Configuration , and then select the setting name you want to stick with the current slot.
3. Select Deployment slot setting , and then select OK .
4. Once the setting section disappears, select Save to keep the changes.
Deployment
Slots are empty when you create a slot. You can use any of the supported deployment technologies to deploy
your application to a slot.
Scaling
All slots scale to the same number of workers as the production slot.
For Consumption plans, the slot scales as the function app scales.
For App Service plans, the app scales to a fixed number of workers. Slots run on the same number of
workers as the app plan.
Add a slot
You can add a slot via the CLI or through the portal. The following steps demonstrate how to create a new slot in
the portal:
1. Navigate to your function app.
2. Select Deployment slots , and then select + Add Slot .
Swap slots
You can swap slots via the CLI or through the portal. The following steps demonstrate how to swap slots in the
portal:
1. Navigate to the function app.
2. Select Deployment slots , and then select Swap .
3. Verify the configuration settings for your swap and select Swap .
The operation may take a moment to complete.
Roll back a swap
If a swap results in an error or you simply want to "undo" a swap, you can roll back to the initial state. To return
to the pre-swapped state, do another swap to reverse the swap.
Remove a slot
You can remove a slot via the CLI or through the portal. The following steps demonstrate how to remove a slot
in the portal:
1. Navigate to Deployment slots in the function app, and then select the slot name.
2. Select Delete .
3. Type the name of the deployment slot you want to delete, and then select Delete .
2. Under App Service plan , select Change App Service plan .
3. Select the plan you want to upgrade to, or create a new plan.
4. Select OK .
Considerations
Azure Functions deployment slots have the following considerations:
The number of slots available to an app depends on the plan. The Consumption plan is only allowed one
deployment slot. Additional slots are available for apps running under other plans. For details, see Service
limits.
Swapping a slot resets keys for apps that have an AzureWebJobsSecretStorageType app setting equal to files .
When slots are enabled, your function app is set to read-only mode in the portal.
Next steps
Deployment technologies in Azure Functions
Continuous delivery with Azure Pipelines
Use Azure Pipelines to automatically deploy to Azure Functions. Azure Pipelines lets you build, test, and deploy
with continuous integration (CI) and continuous delivery (CD) using Azure DevOps.
YAML pipelines are defined using a YAML file in your repository. A step is the smallest building block of a
pipeline and can be a script or task (pre-packaged script). Learn about the key concepts and components that
make up a pipeline.
YAML pipelines aren't available for Azure DevOps 2019 and earlier.
Prerequisites
A GitHub account, where you can create a repository. If you don't have one, you can create one for free.
An Azure DevOps organization. If you don't have one, you can create one for free. If your team already
has one, then make sure you're an administrator of the Azure DevOps project that you want to use.
An ability to run pipelines on Microsoft-hosted agents. You can either purchase a parallel job or you can
request a free tier.
If you already have an app at GitHub that you want to deploy, you can try creating a pipeline for that code.
To use sample code instead, fork this GitHub repo:
https://github.com/microsoft/devops-project-samples/tree/master/dotnet/aspnetcore/functionApp
You can use the following sample to create a YAML file to build a .NET app:
pool:
vmImage: 'windows-latest'
steps:
- script: |
dotnet restore
dotnet build --configuration Release
- task: DotNetCoreCLI@2
inputs:
command: publish
arguments: '--configuration Release --output publish_output'
projects: '*.csproj'
publishWebProjects: false
modifyOutputPath: false
zipAfterPublish: false
- task: ArchiveFiles@2
displayName: "Archive files"
inputs:
rootFolderOrFile: "$(System.DefaultWorkingDirectory)/publish_output"
includeRootFolder: false
archiveFile: "$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip"
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip'
artifactName: 'drop'
To deploy to Azure Functions, add the following snippet at the end of your azure-pipelines.yml file. The default
appType is Windows. You can specify Linux by setting the appType to functionAppLinux .
trigger:
- main
variables:
# Azure service connection established during pipeline creation
azureSubscription: <Name of your Azure subscription>
appName: <Name of the function app>
# Agent VM image name
vmImageName: 'ubuntu-latest'
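A minimal deployment step, sketched with the AzureFunctionApp task used in the slot example later in this article:
steps:
- task: AzureFunctionApp@1
  inputs:
    azureSubscription: $(azureSubscription)
    appType: functionApp
    appName: $(appName)
    package: $(System.ArtifactsDirectory)/**/*.zip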
The snippet assumes that the build steps in your YAML file produce the zip archive in the
$(System.ArtifactsDirectory) folder on your agent.
Deploy a container
You can automatically deploy your code to Azure Functions as a custom container after every successful build.
To learn more about containers, see Create a function on Linux using a custom container.
Deploy with the Azure Function App for Container task
YAML
Classic
The simplest way to deploy to a container is to use the Azure Function App on Container Deploy task.
To deploy, add the following snippet at the end of your YAML file:
trigger:
- main
variables:
# Container registry service connection established during pipeline creation
dockerRegistryServiceConnection: <Docker registry service connection>
imageRepository: <Name of your image repository>
containerRegistry: <Name of the Azure container registry>
dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
tag: '$(Build.BuildId)'
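A sketch of the corresponding deployment step with the AzureFunctionAppContainer task (the input values are placeholders):
steps:
- task: AzureFunctionAppContainer@1
  inputs:
    azureSubscription: <Azure service connection>
    appName: <Name of the function app>
    imageName: $(containerRegistry)/$(imageRepository):$(tag)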
The snippet pushes the Docker image to your Azure Container Registry. The Azure Function App on
Container Deploy task pulls the appropriate Docker image corresponding to the BuildId from the repository
specified, and then deploys the image.
Deploy to a slot
YAML
Classic
You can configure your function app to have multiple slots. Slots allow you to safely deploy your app and test it
before making it available to your customers.
The following YAML snippet shows how to deploy to a staging slot, and then swap to a production slot:
- task: AzureFunctionApp@1
inputs:
azureSubscription: <Azure service connection>
appType: functionAppLinux
appName: <Name of the Function app>
package: $(System.ArtifactsDirectory)/**/*.zip
deployToSlotOrASE: true
resourceGroupName: <Name of the resource group>
slotName: staging
- task: AzureAppServiceManage@0
inputs:
azureSubscription: <Azure service connection>
WebAppName: <name of the Function app>
ResourceGroupName: <name of resource group>
SourceSlot: staging
SwapWithProduction: true
Next steps
Review the Azure Functions overview.
Review the Azure DevOps overview.
Continuous delivery by using GitHub Actions
Use GitHub Actions to define a workflow to automatically build and deploy code to your function app in Azure
Functions.
In GitHub Actions, a workflow is an automated process that you define in your GitHub repository. This process
tells GitHub how to build and deploy your function app project on GitHub.
A workflow is defined by a YAML (.yml) file in the /.github/workflows/ path in your repository. This definition
contains the various steps and parameters that make up the workflow.
For an Azure Functions workflow, the file has three sections:
SECTION | TASKS
Authentication | Download a publish profile. Create a GitHub secret.
Build | Set up the environment. Build the function app.
Deploy | Deploy the function app.
Prerequisites
An Azure account with an active subscription. Create an account for free.
A GitHub account. If you don't have one, sign up for free.
A working function app hosted on Azure with a GitHub repository.
Quickstart: Create a function in Azure using Visual Studio Code
2. Add a new secret using AZURE_FUNCTIONAPP_PUBLISH_PROFILE for Name , the content of the publishing
profile file for Value , and then select Add secret .
GitHub can now authenticate to your function app in Azure.
.NET
Java
JavaScript
Python
env:
AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the
repository root
PARAMETER | EXPLANATION
publish-profile | (Optional) The name of the GitHub secret for your publish profile.
The following example uses version 1 of the functions-action and a publish profile for authentication.
.NET
Java
JavaScript
Python
on:
[push]
env:
AZURE_FUNCTIONAPP_NAME: your-app-name # set this to your application's name
AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the
repository root
DOTNET_VERSION: '2.2.402' # set this to the dotnet version to use
jobs:
build-and-deploy:
runs-on: ubuntu-latest
steps:
- name: 'Checkout GitHub Action'
uses: actions/checkout@v2
on:
[push]
env:
AZURE_FUNCTIONAPP_NAME: your-app-name # set this to your application's name
AZURE_FUNCTIONAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the
repository root
DOTNET_VERSION: '2.2.402' # set this to the dotnet version to use
jobs:
build-and-deploy:
runs-on: windows-latest
steps:
- name: 'Checkout GitHub Action'
uses: actions/checkout@v2
Next steps
Learn more about Azure and GitHub integration
Zip deployment for Azure Functions
This article describes how to deploy your function app project files to Azure from a .zip (compressed) file. You
learn how to do a push deployment, both by using Azure CLI and by using the REST APIs. Azure Functions Core
Tools also uses these deployment APIs when publishing a local project to Azure.
Zip deployment is also an easy way to run your functions from the deployment package. To learn more, see Run
your functions from a package file in Azure.
Azure Functions has the full range of continuous deployment and integration options that are provided by Azure
App Service. For more information, see Continuous deployment for Azure Functions.
To speed up development, you may find it easier to deploy your function app project files directly from a .zip file.
The .zip deployment API takes the contents of a .zip file and extracts the contents into the wwwroot folder of your
function app. This .zip file deployment uses the same Kudu service that powers continuous integration-based
deployments, including:
Deletion of files that were left over from earlier deployments.
Deployment customization, including running deployment scripts.
Deployment logs.
Syncing function triggers in a Consumption plan function app.
For more information, see the .zip deployment reference.
IMPORTANT
When you use .zip deployment, any files from an existing deployment that aren't found in the .zip file are deleted from
your function app.
The code for all the functions in a specific function app is located in a root project folder that contains a host
configuration file. The host.json file contains runtime-specific configurations and is in the root folder of the
function app. A bin folder contains packages and other library files that the function app requires. Specific folder
structures required by the function app depend on language:
C# compiled (.csproj)
C# script (.csx)
F# script
Java
JavaScript
PowerShell
Python
In version 2.x and higher of the Functions runtime, all functions in the function app must share the same
language stack.
A function app includes all of the files and folders in the wwwroot directory. A .zip file deployment includes the
contents of the wwwroot directory, but not the directory itself. When deploying a C# class library project, you
must include the compiled library files and dependencies in a bin subfolder in your .zip package.
When you are developing on a local computer, you can manually create a .zip file of the function app project
folder using built-in .zip compression functionality or third-party tools.
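The push deployment itself uses the az functionapp deployment source config-zip Azure CLI command; a sketch with placeholders:
az functionapp deployment source config-zip -g <resource_group> -n <app_name> --src <zip_file_path>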
This command deploys project files from the downloaded .zip file to your function app in Azure. It then restarts
the app. To view the list of deployments for this function app, you must use the REST APIs.
When you're using Azure CLI on your local computer, <zip_file_path> is the path to the .zip file on your
computer. You can also run Azure CLI in Azure Cloud Shell. When you use Cloud Shell, you must first upload
your deployment .zip file to the Azure Files account that's associated with your Cloud Shell. In that case,
<zip_file_path> is the storage location that your Cloud Shell account uses. For more information, see Persist
files in Azure Cloud Shell.
This request triggers push deployment from the uploaded .zip file. You can review the current and past
deployments by using the https://<app_name>.scm.azurewebsites.net/api/deployments endpoint, as shown in the
following cURL example. Again, replace <app_name> with the name of your app and <deployment_user> with the
username of your deployment credentials.
curl -X POST \
--data-binary @"<zip_file_path>" \
-H "Authorization: Bearer <access_token>" \
"https://<app_name>.scm.azurewebsites.net/api/zipdeploy"
With PowerShell
The following example uses Publish-AzWebapp to upload the .zip file. Replace the placeholders <group-name> , <app-name> , and <zip-file-path> .
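A sketch of the call (parameter names as defined by the Az PowerShell module):
Publish-AzWebapp -ResourceGroupName <group-name> -Name <app-name> -ArchivePath <zip-file-path>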
This request triggers push deployment from the uploaded .zip file.
To review the current and past deployments, run the following commands. Again, replace the <deployment-user>
, <deployment-password> , and <app-name> placeholders.
$username = "<deployment-user>"
$password = "<deployment-password>"
$apiUrl = "https://<app-name>.scm.azurewebsites.net/api/deployments"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username,
$password)))
$userAgent = "powershell/1.0"
Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -UserAgent
$userAgent -Method GET
Deployment customization
The deployment process assumes that the .zip file that you push contains a ready-to-run app. By default, no
customizations are run. To enable the same build processes that you get with continuous integration, add the
following to your application settings:
SCM_DO_BUILD_DURING_DEPLOYMENT=true
When you use .zip push deployment, this setting is false by default. The default is true for continuous
integration deployments. When set to true , your deployment-related settings are used during deployment. You
can configure these settings either as app settings or in a .deployment configuration file that's located in the root
of your .zip file. For more information, see Repository and deployment-related settings in the deployment
reference.
The downloaded .zip file is in the correct format to be republished to your function app by using
.zip push deployment. The portal download can also add the files needed to open your function
app directly in Visual Studio.
Using REST APIs:
Use the following deployment GET API to download the files from your <function_app> project:
https://<function_app>.scm.azurewebsites.net/api/zip/site/wwwroot/
Including /site/wwwroot/ makes sure your zip file includes only the function app project files and not the
entire site. If you are not already signed in to Azure, you will be asked to do so.
You can also download a .zip file from a GitHub repository. When you download a GitHub repository as a .zip
file, GitHub adds an extra folder level for the branch. This extra folder level means that you can't deploy the .zip
file directly as you downloaded it from GitHub. If you're using a GitHub repository to maintain your function
app, you should use continuous integration to deploy your app.
Next steps
Run your functions from a package file in Azure
Run your functions from a package file in Azure
In Azure, you can run your functions directly from a deployment package file in your function app. The other
option is to deploy your files in the d:\home\site\wwwroot (Windows) or /home/site/wwwroot (Linux) directory of
your function app.
This article describes the benefits of running your functions from a package. It also shows how to enable this
functionality in your function app.
VALUE | DESCRIPTION
1 | Indicates that the function app runs from a local package file deployed in the d:\home\data\SitePackages (Windows) or /home/data/SitePackages (Linux) folder of your function app.
<URL> | Sets a URL that is the remote location of the specific package file you want to run. Required for function apps running on Linux in a Consumption plan.
The following table indicates the recommended WEBSITE_RUN_FROM_PACKAGE options for deployment to a specific
operating system and hosting plan:
HOSTING PLAN | WINDOWS | LINUX
Consumption | 1 | <URL>
Premium | 1 | 1
Dedicated | 1 | 1
General considerations
The package file must be .zip formatted. Tar and gzip formats aren't currently supported.
Zip deployment is recommended.
When deploying your function app to Windows, you should set WEBSITE_RUN_FROM_PACKAGE to 1 and publish
with zip deployment.
When you run from a package, the wwwroot folder becomes read-only and you'll receive an error when
writing files to this directory. Files are also read-only in the Azure portal.
The maximum size for a deployment package file is currently 1 GB.
You can't use local cache when running from a deployment package.
If your project needs to use remote build, don't use the WEBSITE_RUN_FROM_PACKAGE app setting. Instead add
the SCM_DO_BUILD_DURING_DEPLOYMENT=true deployment customization app setting. For Linux, also add the
ENABLE_ORYX_BUILD=true setting. To learn more, see Remote build.
Using WEBSITE_RUN_FROM_PACKAGE = 1
This section provides information about how to run your function app from a local package file.
Considerations for deploying from an on-site package
Using an on-site package is the recommended option for running from the deployment package, except on
Linux hosted in a Consumption plan.
Zip deployment is the recommended way to upload a deployment package to your site.
When not using zip deployment, make sure the d:\home\data\SitePackages (Windows) or
/home/data/SitePackages (Linux) folder has a file named packagename.txt . This file contains only the name,
without any whitespace, of the package file in this folder that's currently running.
Integration with zip deployment
Zip deployment is a feature of Azure App Service that lets you deploy your function app project to the wwwroot
directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package
to the d:\home\data\SitePackages (Windows) or /home/data/SitePackages (Linux) folder.
With the WEBSITE_RUN_FROM_PACKAGE app setting value of 1 , the zip deployment APIs copy your package to the
d:\home\data\SitePackages (Windows) or /home/data/SitePackages (Linux) folder instead of extracting the files
to d:\home\site\wwwroot (Windows) or /home/site/wwwroot (Linux). It also creates the packagename.txt file. After
a restart, the package is mounted to wwwroot as a read-only filesystem. For more information about zip
deployment, see Zip deployment for Azure Functions.
NOTE
When a deployment occurs, a restart of the function app is triggered. Function executions currently running during the
deploy are terminated. Please review Improve the performance and reliability of Azure Functions to learn how to write
stateless and defensive functions.
When running a function app on Windows, the app setting WEBSITE_RUN_FROM_PACKAGE = <URL> gives worse
cold-start performance and isn't recommended.
When you specify a URL, you must also manually sync triggers after you publish an updated package.
The Functions runtime must have permissions to access the package URL.
You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container
with a Shared Access Signature (SAS) or use a managed identity to enable the Functions runtime to access
the package.
When running on a Premium plan, make sure to eliminate cold starts.
When running on a Dedicated plan, make sure you've enabled Always On.
You can use the Azure Storage Explorer to upload package files to blob containers in your storage account.
Manually uploading a package to Blob Storage
To deploy a zipped package when using the URL option, you must create a .zip compressed deployment package
and upload it to the destination. This example deploys to a container in Blob Storage.
1. Create a .zip package for your project using the utility of your choice.
2. In the Azure portal, search for your storage account name or browse for it in storage accounts.
3. In the storage account, select Containers under Data storage .
4. Select + Container to create a new Blob Storage container in your account.
5. In the New container page, provide a Name (for example, "deployments"), make sure the Public
access level is Private , and select Create .
6. Select the container you created, select Upload , browse to the location of the .zip file you created with
your project, and select Upload .
7. After the upload completes, choose your uploaded blob file, and copy the URL. You may need to generate a SAS URL if you aren't using an identity.
8. Search for your function app or browse for it in the Function App page.
9. In your function app, select Configuration under Settings .
10. In the Application Settings tab, select New application setting
11. Enter the value WEBSITE_RUN_FROM_PACKAGE for the Name , and paste the URL of your package in Blob
Storage as the Value .
12. Select OK . Then select Save > Continue to save the setting and restart the app.
Now you can run your function in Azure to verify that deployment has succeeded using the deployment
package .zip file.
Fetch a package from Azure Blob Storage using a managed identity
Azure Blob Storage can be configured to authorize requests with Azure AD. This means that instead of
generating a SAS key with an expiration, you can instead rely on the application's managed identity. By default,
the app's system-assigned identity will be used. If you wish to specify a user-assigned identity, you can set the
WEBSITE_RUN_FROM_PACKAGE_BLOB_MI_RESOURCE_ID app setting to the resource ID of that identity. The setting can also
accept "SystemAssigned" as a value, although this is the same as omitting the setting altogether.
To enable the package to be fetched using the identity:
1. Ensure that the blob is configured for private access.
2. Grant the identity the Storage Blob Data Reader role with scope over the package blob. See Assign an
Azure role for access to blob data for details on creating the role assignment.
3. Set the WEBSITE_RUN_FROM_PACKAGE application setting to the blob URL of the package. This will likely be of
the form "https://{storage-account-name}.blob.core.windows.net/{container-name}/{path-to-package}" or
similar.
Next steps
Continuous deployment for Azure Functions
Azure Functions on Kubernetes with KEDA
The Azure Functions runtime provides flexibility in hosting where and how you want. KEDA (Kubernetes-based
Event Driven Autoscaling) pairs seamlessly with the Azure Functions runtime and tooling to provide event
driven scale in Kubernetes.
To learn more about Dockerfile generation, see the func init reference.
To build an image and deploy your functions to Kubernetes, run the following command:
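A Core Tools sketch of the deploy command (the registry value is a placeholder):
func kubernetes deploy --name <name-of-function-deployment> --registry <container-registry-username>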
In this example, replace <name-of-function-deployment> with the name of your function app.
The deploy command does the following:
1. The Dockerfile created earlier is used to build a local image for the function app.
2. The local image is tagged and pushed to the container registry where the user is logged in.
3. A manifest is created and applied to the cluster that defines a Kubernetes Deployment resource, a
ScaledObject resource, and Secrets , which includes environment variables imported from your
local.settings.json file.
Next Steps
For more information, see the following resources:
Create a function using a custom image
Code and test Azure Functions locally
How the Azure Function Consumption plan works
Automate resource deployment for your function
app in Azure Functions
You can use an Azure Resource Manager template to deploy a function app. This article outlines the required
resources and parameters for doing so. You might need to deploy other resources, depending on the triggers
and bindings in your function app.
For more information about creating templates, see Authoring Azure Resource Manager templates.
For sample templates, see:
ARM templates for function app deployment
Function app on Consumption plan
Function app on Azure App Service plan
Required resources
An Azure Functions deployment typically consists of these resources:
1A hosting plan is only required when you choose to run your function app on a Premium plan or on an App
Service plan.
TIP
While not required, it is strongly recommended that you configure Application Insights for your app.
Storage account
An Azure storage account is required for a function app. You need a general purpose account that supports
blobs, tables, queues, and files. For more information, see Azure Functions storage account requirements.
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
"apiVersion": "2019-06-01",
"location": "[resourceGroup().location]",
"kind": "StorageV2",
"sku": {
"name": "[parameters('storageAccountType')]"
}
}
You must also specify the AzureWebJobsStorage property as an app setting in the site configuration. If the
function app doesn't use Application Insights for monitoring, it should also specify AzureWebJobsDashboard as an
app setting.
The Azure Functions runtime uses the AzureWebJobsStorage connection string to create internal queues. When
Application Insights is not enabled, the runtime uses the AzureWebJobsDashboard connection string to log to
Azure Table storage and power the Monitor tab in the portal.
These properties are specified in the appSettings collection in the siteConfig object:
"appSettings": [
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),
';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
},
{
"name": "AzureWebJobsDashboard",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),
';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
}
]
Application Insights
Application Insights is recommended for monitoring your function apps. The Application Insights resource is
defined with the type Microsoft.Insights/components and the kind web:
{
"apiVersion": "2015-05-01",
"name": "[variables('appInsightsName')]",
"type": "Microsoft.Insights/components",
"kind": "web",
"location": "[resourceGroup().location]",
"tags": {
"[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/sites/',
variables('functionAppName'))]": "Resource"
},
"properties": {
"Application_Type": "web",
"ApplicationId": "[variables('appInsightsName')]"
}
},
In addition, the instrumentation key needs to be provided to the function app using the
APPINSIGHTS_INSTRUMENTATIONKEY application setting. This property is specified in the appSettings collection in
the siteConfig object:
"appSettings": [
{
"name": "APPINSIGHTS_INSTRUMENTATIONKEY",
"value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')),
'2015-05-01').InstrumentationKey]"
}
]
Hosting plan
The definition of the hosting plan varies, and can be one of the following:
Consumption plan (default)
Premium plan
App Service plan
Function app
The function app resource is defined by using a resource of type Microsoft.Web/sites and kind functionapp:
{
"apiVersion": "2015-08-01",
"type": "Microsoft.Web/sites",
"name": "[variables('functionAppName')]",
"location": "[resourceGroup().location]",
"kind": "functionapp",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]"
]
}
IMPORTANT
If you explicitly define a hosting plan, you need an additional item in the dependsOn array:
"[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
The FUNCTIONS_WORKER_RUNTIME setting defines the language stack to be used for functions in this app. Valid
values are dotnet, node, java, python, and powershell.
These properties are specified in the appSettings collection in the siteConfig property:
"properties": {
"siteConfig": {
"appSettings": [
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=',
variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-
01').keys[0].value)]"
},
{
"name": "FUNCTIONS_WORKER_RUNTIME",
"value": "node"
},
{
"name": "WEBSITE_NODE_DEFAULT_VERSION",
"value": "~14"
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~4"
}
]
}
}
Windows
Linux
{
"type": "Microsoft.Web/serverfarms",
"apiVersion": "2021-02-01",
"name": "[variables('hostingPlanName')]",
"location": "[parameters('location')]",
"sku": {
"name": "Y1",
"tier": "Dynamic",
"size": "Y1",
"family": "Y",
"capacity":0
},
"properties": {
"name":"[variables('hostingPlanName')]",
"computeMode": "Dynamic"
}
}
On Windows, a Consumption plan requires two other settings in the site configuration:
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE. These properties configure the storage
account and file share where the function app code and configuration are stored.
For a sample Azure Resource Manager template, see Azure Function App Hosted on Windows Consumption
Plan.
{
"type": "Microsoft.Web/sites",
"apiVersion": "2021-02-01",
"name": "[parameters('functionAppName')]",
"location": "[parameters('location')]",
"kind": "functionapp",
"dependsOn": [
"[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
],
"properties": {
"serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
"siteConfig": {
"appSettings": [
{
"name": "APPINSIGHTS_INSTRUMENTATIONKEY",
"value": "[reference(resourceId('microsoft.insights/components',
variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
},
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),
';EndpointSuffix=', environment().suffixes.storage,
';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')),
'2019-06-01').keys[0].value)]"
},
{
"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),
';EndpointSuffix=', environment().suffixes.storage,
';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')),
'2019-06-01').keys[0].value)]"
},
{
"name": "WEBSITE_CONTENTSHARE",
"value": "[toLower(parameters('functionAppName'))]"
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~4"
},
{
"name": "FUNCTIONS_WORKER_RUNTIME",
"value": "node"
},
{
"name": "WEBSITE_NODE_DEFAULT_VERSION",
"value": "~14"
}
]
}
}
}
IMPORTANT
You don't need to set the WEBSITE_CONTENTSHARE setting in a deployment slot. This setting is generated for you when the
app is created in the deployment slot.
Windows
Linux
{
"type": "Microsoft.Web/serverfarms",
"apiVersion": "2021-02-01",
"name": "[variables('hostingPlanName')]",
"location": "[parameters('location')]",
"sku": {
"tier": "ElasticPremium",
"name": "EP1",
"family": "EP"
},
"properties": {
"name": "[parameters('hostingPlanName')]",
"maximumElasticWorkerCount": 20
},
"kind": "elastic"
}
Windows
Linux
{
"type": "Microsoft.Web/sites",
"apiVersion": "2021-02-01",
"name": "[parameters('functionAppName')]",
"location": "[parameters('location')]",
"kind": "functionapp",
"dependsOn": [
"[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
],
"properties": {
"serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
"siteConfig": {
"appSettings": [
{
"name": "APPINSIGHTS_INSTRUMENTATIONKEY",
"value": "[reference(resourceId('microsoft.insights/components',
variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
},
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),
';EndpointSuffix=', environment().suffixes.storage,
';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')),
'2019-06-01').keys[0].value)]"
},
{
"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),
';EndpointSuffix=', environment().suffixes.storage,
';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')),
'2019-06-01').keys[0].value)]"
},
{
"name": "WEBSITE_CONTENTSHARE",
"value": "[toLower(parameters('functionAppName'))]"
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~4"
},
{
"name": "FUNCTIONS_WORKER_RUNTIME",
"value": "node"
},
{
"name": "WEBSITE_NODE_DEFAULT_VERSION",
"value": "~14"
}
]
}
}
}
IMPORTANT
You don't need to set the WEBSITE_CONTENTSHARE setting because it's generated for you when the site is first created.
Windows
Linux
{
"type": "Microsoft.Web/serverfarms",
"apiVersion": "2021-02-01",
"name": "[variables('hostingPlanName')]",
"location": "[parameters('location')]",
"sku": {
"tier": "Standard",
"name": "S1",
"size": "S1",
"family": "S",
"capacity": 1
}
}
Windows
Linux
{
"type": "Microsoft.Web/sites",
"apiVersion": "2021-02-01",
"name": "[parameters('functionAppName')]",
"location": "[parameters('location')]",
"kind": "functionapp",
"dependsOn": [
"[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]"
],
"properties": {
"serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
"siteConfig": {
"alwaysOn": true,
"appSettings": [
{
"name": "APPINSIGHTS_INSTRUMENTATIONKEY",
"value": "[reference(resourceId('microsoft.insights/components',
variables('applicationInsightsName')), '2015-05-01').InstrumentationKey]"
},
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),
';EndpointSuffix=', environment().suffixes.storage,
';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')),
'2019-06-01').keys[0].value)]"
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~4"
},
{
"name": "FUNCTIONS_WORKER_RUNTIME",
"value": "node"
},
{
"name": "WEBSITE_NODE_DEFAULT_VERSION",
"value": "~14"
}
]
}
}
}
Both sites and plans must reference the custom location through an extendedLocation field. This block sits
outside of properties, as a peer to kind and location:
{
"extendedLocation": {
"type": "customlocation",
"name": "[parameters('customLocationId')]"
},
}
The plan resource should use the Kubernetes (K1) SKU, and its kind field should be "linux,kubernetes". Within
properties, reserved should be true and kubeEnvironmentProfile.id should be set to the App Service
Kubernetes environment resource ID. An example plan might look like the following:
{
"type": "Microsoft.Web/serverfarms",
"name": "[variables('hostingPlanName')]",
"location": "[parameters('location')]",
"apiVersion": "2020-12-01",
"kind": "linux,kubernetes",
"sku": {
"name": "K1",
"tier": "Kubernetes"
},
"extendedLocation": {
"type": "customlocation",
"name": "[parameters('customLocationId')]"
},
"properties": {
"name": "[variables('hostingPlanName')]",
"location": "[parameters('location')]",
"workerSizeId": "0",
"numberOfWorkers": "1",
"kubeEnvironmentProfile": {
"id": "[parameters('kubeEnvironmentId')]"
},
"reserved": true
}
}
The function app resource should have its kind field set to "functionapp,linux,kubernetes" or
"functionapp,linux,kubernetes,container", depending on whether you intend to deploy via code or container. An
example function app might look like the following:
{
"apiVersion": "2018-11-01",
"type": "Microsoft.Web/sites",
"name": "[variables('appName')]",
"kind": "kubernetes,functionapp,linux,container",
"location": "[parameters('location')]",
"extendedLocation": {
"type": "customlocation",
"name": "[parameters('customLocationId')]"
},
"dependsOn": [
"[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]",
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[variables('hostingPlanId')]"
],
"properties": {
"serverFarmId": "[variables('hostingPlanId')]",
"siteConfig": {
"linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart",
"appSettings": [
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~3"
},
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=',
variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2015-05-01-
preview').key1)]"
},
{
"name": "APPINSIGHTS_INSTRUMENTATIONKEY",
"value": "[reference(resourceId('microsoft.insights/components/',
variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
}
],
"alwaysOn": true
}
}
}
Customizing a deployment
A function app has many child resources that you can use in your deployment, including app settings and
source control options. You also might choose to remove the sourcecontrols child resource, and use a different
deployment option instead.
IMPORTANT
To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are
deployed in Azure. In the following example, top-level configurations are applied by using siteConfig . It's important to
set these configurations at a top level, because they convey information to the Functions runtime and deployment
engine. Top-level information is required before the child sourcecontrols/web resource is applied. Although it's possible
to configure these settings in the child-level config/appSettings resource, in some cases your function app must be
deployed before config/appSettings is applied. For example, when you are using functions with Logic Apps, your
functions are a dependency of another resource.
{
"apiVersion": "2015-08-01",
"name": "[parameters('appName')]",
"type": "Microsoft.Web/sites",
"kind": "functionapp",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.Web/serverfarms', parameters('appName'))]"
],
"properties": {
"serverFarmId": "[variables('appServicePlanName')]",
"siteConfig": {
"alwaysOn": true,
"appSettings": [
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~3"
},
{
"name": "Project",
"value": "src"
}
]
}
},
"resources": [
{
"apiVersion": "2015-08-01",
"name": "appsettings",
"type": "config",
"dependsOn": [
"[resourceId('Microsoft.Web/Sites', parameters('appName'))]",
"[resourceId('Microsoft.Web/Sites/sourcecontrols', parameters('appName'), 'web')]",
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
],
"properties": {
"AzureWebJobsStorage": "[concat('DefaultEndpointsProtocol=https;AccountName=',
variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-
01').keys[0].value)]",
"AzureWebJobsDashboard": "[concat('DefaultEndpointsProtocol=https;AccountName=',
variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-
01').keys[0].value)]",
"FUNCTIONS_EXTENSION_VERSION": "~3",
"FUNCTIONS_WORKER_RUNTIME": "dotnet",
"Project": "src"
}
},
{
"apiVersion": "2015-08-01",
"name": "web",
"type": "sourcecontrols",
"dependsOn": [
"[resourceId('Microsoft.Web/sites/', parameters('appName'))]"
],
"properties": {
"RepoUrl": "[parameters('sourceCodeRepositoryURL')]",
"branch": "[parameters('sourceCodeBranch')]",
"IsManualIntegration": "[parameters('sourceCodeManualIntegration')]"
}
}
]
}
TIP
This template uses the Project app settings value, which sets the base directory in which the Functions deployment
engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the src folder. So, in the
preceding example, we set the app settings value to src . If your functions are in the root of your repository, or if you
are not deploying from source control, you can remove this app settings value.
To deploy your template, you can add a Deploy to Azure button that links to the template. Replace
<url-encoded-path-to-azuredeploy-json> with a URL-encoded link to the raw azuredeploy.json file in your
repository. Using markdown:
[![Deploy to Azure](https://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/<url-encoded-path-to-azuredeploy-json>)
Using HTML:
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/<url-encoded-path-to-azuredeploy-json>" target="_blank"><img src="https://azuredeploy.net/deploybutton.png"></a>
To test this deployment, you can use a template like this one that creates a function app on Windows in a
Consumption plan. Replace <function-app-name> with a unique name for your function app.
# Create the parameters for the file, which for this template is the function app name.
$TemplateParams = @{"appName" = "<function-app-name>"}
# Deploy the template; the -TemplateUri value below is a placeholder for the URL of your azuredeploy.json file.
New-AzResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -TemplateUri "<url-to-azuredeploy-json>" -TemplateParameterObject $TemplateParams
Next steps
Learn more about how to develop and configure Azure Functions.
Azure Functions developer reference
How to configure Azure function app settings
Create your first Azure function
Manage your function app
In Azure Functions, a function app provides the execution context for your individual functions. Function app
behaviors apply to all functions hosted by a given function app. All functions in a function app must be of the
same language.
Individual functions in a function app are deployed together and are scaled together. All functions in the same
function app share resources, per instance, as the function app scales.
Connection strings, environment variables, and other application settings are defined separately for each
function app. Any data that must be shared between function apps should be stored externally in a persisted
store.
You can navigate to everything you need to manage your function app from the overview page, in particular the
Application settings and Platform features.
To find the application settings, see Get started in the Azure portal.
The Application settings tab maintains settings that are used by your function app. You must select Show
values to see the values in the portal. To add a setting in the portal, select New application setting and add
the new key-value pair.
Portal
Azure CLI
Azure PowerShell
To determine the type of plan used by your function app, see App Service plan in the Overview tab for the
function app in the Azure portal. To see the pricing tier, select the name of the App Service Plan, and then
select Properties from the left pane.
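If you prefer the command line, a hedged Azure CLI sketch of the equivalent check (the appServicePlanId query field is an assumption based on the az functionapp show output):
az functionapp show --name <APP_NAME> --resource-group <RESOURCE_GROUP> --query appServicePlanId --output tsv
az appservice plan show --ids <PLAN_ID> --query sku --output json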
Plan migration
You can use Azure CLI commands to migrate a function app between a Consumption plan and a Premium plan
on Windows. The specific commands depend on the direction of the migration. Direct migration to a Dedicated
(App Service) plan isn't currently supported.
This migration isn't supported on Linux.
Consumption to Premium
Use the following procedure to migrate from a Consumption plan to a Premium plan on Windows; a consolidated
sketch of the commands appears after these steps:
1. Run the az functionapp plan create command as follows to create a new App Service plan (Elastic
Premium) in the same region and resource group as your existing function app:
2. Run the az functionapp update command as follows to migrate the existing function app to the new
Premium plan:
3. If you no longer need your previous Consumption function app plan, delete your original function app
plan after confirming you have successfully migrated to the new one. Run the az functionapp plan list
command as follows to get a list of all Consumption plans in your resource group:
You can safely delete the plan with zero sites, which is the one you migrated from.
4. Run the az functionapp plan delete command as follows to delete the Consumption plan you migrated
from.
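A consolidated sketch of these commands, with placeholder names (the sku and query values are illustrative assumptions):
# Step 1: Create the new Elastic Premium (EP1) plan.
az functionapp plan create --name <NEW_PREMIUM_PLAN_NAME> --resource-group <MY_RESOURCE_GROUP> --location <REGION> --sku EP1
# Step 2: Move the function app to the new plan.
az functionapp update --name <MY_APP_NAME> --resource-group <MY_RESOURCE_GROUP> --plan <NEW_PREMIUM_PLAN_NAME>
# Steps 3-4: List Consumption (Y family) plans, then delete the now-empty one.
az functionapp plan list --resource-group <MY_RESOURCE_GROUP> --query "[?sku.family=='Y'].{PlanName:name,Sites:numberOfSites}" -o table
az functionapp plan delete --name <CONSUMPTION_PLAN_NAME> --resource-group <MY_RESOURCE_GROUP>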
Premium to Consumption
Use the following procedure to migrate from a Premium plan to a Consumption plan on Windows; a consolidated
sketch of the commands appears after these steps:
1. Run the az functionapp create command as follows to create a new function app (Consumption) in
the same region and resource group as your existing function app. This command also creates a new
Consumption plan in which the function app runs.
2. Run the az functionapp update command as follows to migrate the existing function app to the new
Consumption plan.
3. Run the az functionapp delete command as follows to delete the function app you created in step 1, since
you only need the plan that was created to run the existing function app.
az functionapp delete --name <NEW_CONSUMPTION_APP_NAME> --resource-group <MY_RESOURCE_GROUP>
4. If you no longer need your previous Premium function app plan, delete your original function app plan
after confirming that you've successfully migrated to the new one. If the plan isn't deleted, you'll still be
charged for the Premium plan. Run the az functionapp plan list command as follows to get a list of all
Premium plans in your resource group.
5. Run the az functionapp plan delete command as follows to delete the Premium plan you migrated from.
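A consolidated sketch of these commands, with placeholder names (the runtime and query values are illustrative assumptions):
# Step 1: Create a new function app, which also creates a new Consumption plan.
az functionapp create --name <NEW_CONSUMPTION_APP_NAME> --resource-group <MY_RESOURCE_GROUP> --consumption-plan-location <REGION> --runtime node --functions-version 4 --storage-account <STORAGE_ACCOUNT_NAME>
# Step 2: Move the existing function app to the new Consumption plan.
az functionapp update --name <MY_APP_NAME> --resource-group <MY_RESOURCE_GROUP> --plan <NEW_CONSUMPTION_PLAN_NAME> --force
# Step 3: Delete the function app created in step 1 (the az functionapp delete command shown earlier).
# Steps 4-5: List Premium (EP family) plans, then delete the one you migrated from.
az functionapp plan list --resource-group <MY_RESOURCE_GROUP> --query "[?sku.family=='EP'].{PlanName:name,Sites:numberOfSites}" -o table
az functionapp plan delete --name <PREMIUM_PLAN_NAME> --resource-group <MY_RESOURCE_GROUP>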
Portal
Azure CLI
Azure PowerShell
1. Sign in to the Azure portal, then search for and select Function App.
2. Select the function app you want to verify.
3. In the left navigation under Functions, select App keys.
This returns the host keys, which can be used to access any function in the app. It also returns the system
key, which gives anyone administrator-level access to all of the function app's APIs.
You can also practice least privilege by using a key scoped to a specific function; select Function keys
under Developer in your HTTP-triggered function.
The Functions editor built into the Azure portal lets you update your function code and configuration
(function.json) files directly in the portal.
1. Select your function app, then under Functions select Functions.
2. Choose your function and select Code + test under Developer.
3. Choose your file to edit and select Save when you're done.
Files in the root of the app, such as function.proj or extensions.csproj, need to be created and edited by using the
Advanced Tools (Kudu).
1. Select your function app, then under Development tools select Advanced tools > Go.
2. If prompted, sign in to the SCM site with your Azure credentials.
3. From the Debug console menu, choose CMD.
4. Navigate to .\site\wwwroot, select the plus (+) button at the top, and select New file.
5. Name the file, such as extensions.csproj, and press Enter.
6. Select the edit button next to the new file, add or update code in the file, and select Save.
7. For a project file like extensions.csproj, run the following command to rebuild the extensions project:
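One possible invocation, assuming the default Kudu NuGet cache path on Windows:
dotnet build extensions.csproj -o bin --no-incremental --packages D:\home\.nuget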
Platform features
Function apps run in, and are maintained by, the Azure App Service platform. As such, your function apps have
access to most of the features of Azure's core web hosting platform. When working in the Azure portal, the left
pane is where you access the many features of the App Service platform that you can use in your function apps.
Portal feature support varies by hosting plan and operating system. For example, backups are available only in
the Dedicated (App Service) plan.
The rest of this article focuses on the following features in the portal that are useful for your function apps:
App Service editor
Console
Advanced tools (Kudu)
Deployment options
CORS
Authentication
For more information about how to work with App Service settings, see Configure Azure App Service Settings.
App Service editor
The App Service editor is an advanced in-portal editor that you can use to modify JSON configuration files and
code files alike. Choosing this option launches a separate browser tab with a basic editor. This enables you to
integrate with the Git repository, run and debug code, and modify function app settings. This editor provides an
enhanced development environment for your functions compared with the built-in function editor.
We recommend that you consider developing your functions on your local computer. When you develop locally
and publish to Azure, your project files are read-only in the portal. To learn more, see Code and test Azure
Functions locally.
Console
The in-portal console is an ideal developer tool when you prefer to interact with your function app from the
command line. Common commands include directory and file creation and navigation, as well as executing
batch files and scripts.
When developing locally, we recommend using the Azure Functions Core Tools and the Azure CLI.
Advanced tools (Kudu)
The advanced tools for App Service (also known as Kudu) provide access to advanced administrative features of
your function app. From Kudu, you manage system information, app settings, environment variables, site
extensions, HTTP headers, and server variables. You can also launch Kudu by browsing to the SCM endpoint for
your function app, such as https://<myfunctionapp>.scm.azurewebsites.net/.
Deployment Center
When you use a source control solution to develop and maintain your functions code, Deployment Center lets
you build and deploy from source control. Your project is built and deployed to Azure when you make updates.
For more information, see Deployment technologies in Azure Functions.
Cross-origin resource sharing
To prevent malicious code execution on the client, modern browsers block requests from web applications to
resources running in a separate domain. Cross-origin resource sharing (CORS) lets an
Access-Control-Allow-Origin header declare which origins are allowed to call endpoints on your function app.
Portal
When you configure the Allowed origins list for your function app, the Access-Control-Allow-Origin header is
automatically added to all responses from HTTP endpoints in your function app.
When the wildcard ( * ) is used, all other domains are ignored.
Use the az functionapp cors add command to add a domain to the allowed origins list. The following example
adds the contoso.com domain:
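A hedged sketch, with placeholder app and resource group names:
az functionapp cors add --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --allowed-origins https://contoso.com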
Use the az functionapp cors show command to list the current allowed origins.
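For example, assuming the same placeholders:
az functionapp cors show --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME>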
Authentication
When functions use an HTTP trigger, you can require calls to first be authenticated. App Service supports Azure
Active Directory authentication and sign-in with social providers, such as Facebook, Microsoft, and Twitter. For
details on configuring specific authentication providers, see Azure App Service authentication overview.
Next steps
Configure Azure App Service Settings
Continuous deployment for Azure Functions
How to target Azure Functions runtime versions
A function app runs on a specific version of the Azure Functions runtime. There are four major versions: 4.x, 3.x,
2.x, and 1.x. By default, function apps are created in version 4.x of the runtime. This article explains how to
configure a function app in Azure to run on the version you choose. For information about how to configure a
local development environment for a specific version, see Code and test Azure Functions locally.
The way that you manually target a specific version depends on whether you're running Windows or Linux.
NOTE
If you pin to a specific major version of Azure Functions, and then try to publish to Azure using Visual Studio, a dialog
window will pop up prompting you to update to the latest version or cancel the publish. To avoid this, add the
<DisableFunctionExtensionVersionUpdate>true</DisableFunctionExtensionVersionUpdate> property in your
.csproj file.
When a new version is publicly available, a prompt in the portal gives you the chance to move up to that
version. After moving to a new version, you can always use the FUNCTIONS_EXTENSION_VERSION application setting
to move back to a previous version.
The following table shows the FUNCTIONS_EXTENSION_VERSION values for each major version to enable automatic
updates:
4.x: ~4
3.x: ~3
2.x: ~2
1.x: ~1
A change to the runtime version causes a function app to restart.
NOTE
.NET Function apps pinned to ~2.0 opt out of the automatic upgrade to .NET Core 3.1. To learn more, see Functions
v2.x considerations.
IMPORTANT
Although the runtime version is determined by the FUNCTIONS_EXTENSION_VERSION setting, you should only make this
change in the Azure portal and not by changing the setting directly. This is because the portal validates your changes and
makes other related changes as needed.
Portal
Azure CLI
PowerShell
Use the following procedure to view and update the runtime version currently used by a function app.
1. In the Azure portal, browse to your function app.
2. Under Settings, choose Configuration. In the Function runtime settings tab, locate the Runtime
version and note the specific runtime version. By default, it's set to ~4.
3. To pin your function app to the version 1.x runtime, choose ~1 under Runtime version. This switch is
disabled when you have functions in your app.
4. When you change the runtime version, go back to the Overview tab and choose Restart to restart the
app. The function app restarts running on the version 1.x runtime, and the version 1.x templates are used
when you create functions.
The function app restarts after the change is made to the application setting.
Portal
Azure CLI
PowerShell
Viewing and modifying site config settings for function apps isn't supported in the Azure portal. Use the Azure
CLI instead.
The function app restarts after the change is made to the site config.
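For example, a hedged sketch of pinning the Linux runtime with the Azure CLI (the version string is an illustrative value):
az functionapp config set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"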
NOTE
For apps running in a Consumption plan, setting LinuxFxVersion to a specific image may increase cold start times. This
is because pinning to a specific image prevents Functions from using some cold start optimizations.
Next steps
Target the 2.0 runtime in your local development environment
See Release notes for runtime versions
How to disable functions in Azure Functions
This article explains how to disable a function in Azure Functions. To disable a function means to make the
runtime ignore the automatic trigger that's defined for the function. This lets you prevent a specific function
from running without stopping the entire function app.
The recommended way to disable a function is with an app setting in the format
AzureWebJobs.<FUNCTION_NAME>.Disabled set to true . You can create and modify this application setting in a
number of ways, including by using the Azure CLI and from your function's Overview tab in the Azure portal.
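For example, a hedged Azure CLI sketch that disables a function named HttpExample (app and group names are placeholders):
az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobs.HttpExample.Disabled=true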
Disable a function
Portal
Azure CLI
Azure PowerShell
Use the Enable and Disable buttons on the function's Overview page. These buttons work by changing the
value of the AzureWebJobs.<FUNCTION_NAME>.Disabled app setting. This function-specific setting is created the first
time it's disabled.
Even when you publish to your function app from a local project, you can still use the portal to disable functions
in the function app.
NOTE
The portal-integrated testing functionality ignores the Disabled setting. This means that a disabled function still runs
when started from the Test window in the portal.
Functions in a slot
By default, app settings also apply to apps running in deployment slots. You can, however, override the app
setting used by the slot by setting a slot-specific app setting. For example, you might want a function to be active
in production but not during deployment testing, such as a timer triggered function.
To disable a function only in the staging slot:
Portal
Azure CLI
Azure PowerShell
Navigate to the slot instance of your function app by selecting Deployment slots under Deployment ,
choosing your slot, and selecting Functions in the slot instance. Choose your function, then use the Enable and
Disable buttons on the function's Overview page. These buttons work by changing the value of the
AzureWebJobs.<FUNCTION_NAME>.Disabled app setting. This function-specific setting is created the first time it's
disabled.
You can also directly add the app setting named AzureWebJobs.<FUNCTION_NAME>.Disabled with a value of true in
the Configuration for the slot instance. When you add a slot-specific app setting, make sure to check the
Deployment slot setting box. This maintains the setting value with the slot during swaps.
To learn more, see Azure Functions Deployment slots.
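For example, a hedged Azure CLI sketch that disables HttpExample only in a slot named staging (--slot-settings keeps the value with the slot during swaps):
az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --slot staging --slot-settings AzureWebJobs.HttpExample.Disabled=true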
local.settings.json
Functions can be disabled in the same way when running locally. To disable a function named HttpExample , add
an entry to the Values collection in the local.settings.json file, as follows:
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"AzureWebJobs.HttpExample.Disabled": true
}
}
Other methods
While the application setting method is recommended for all languages and all runtime versions, there are
several other ways to disable functions. These methods, which vary by language and runtime version, are
maintained for backward compatibility.
C# class libraries
In a class library function, you can also use the Disable attribute to prevent the function from being triggered.
This attribute lets you customize the name of the setting used to disable the function. Use the version of the
attribute that lets you define a constructor parameter that refers to a Boolean app setting, as shown in the
following example:
public static class QueueFunctions
{
[Disable("MY_TIMER_DISABLED")]
[FunctionName("QueueTrigger")]
public static void QueueTrigger(
[QueueTrigger("myqueue-items")] string myQueueItem,
TraceWriter log)
{
log.Info($"C# function processed: {myQueueItem}");
}
}
This method lets you enable and disable the function by changing the app setting, without recompiling or
redeploying. Changing an app setting causes the function app to restart, so the disabled state change is
recognized immediately.
There's also a version of the attribute constructor that doesn't accept a string for the setting name. This version
of the attribute isn't recommended. If you use this version, you must recompile and redeploy the project to
change the function's disabled state.
Functions 1.x - scripting languages
In version 1.x, you can also use the disabled property of the function.json file to tell the runtime not to trigger a
function. This method only works for scripting languages such as C# script and JavaScript. The disabled
property can be set to true or to the name of an app setting:
{
"bindings": [
{
"type": "queueTrigger",
"direction": "in",
"name": "myQueueItem",
"queueName": "myqueue-items",
"connection":"MyStorageConnectionAppSetting"
}
],
"disabled": true
}
or
"bindings": [
...
],
"disabled": "IS_DISABLED"
In the second example, the function is disabled when there is an app setting that is named IS_DISABLED and is
set to true or 1.
IMPORTANT
The portal uses application settings to disable v1.x functions. When an application setting conflicts with the function.json
file, an error can occur. You should remove the disabled property from the function.json file to prevent errors.
Considerations
Keep the following considerations in mind when you disable functions:
When you disable an HTTP triggered function by using the methods described in this article, the endpoint
may still be accessible when running on your local computer.
At this time, function names that contain a hyphen (-) can't be disabled when running on a Linux plan. If
you need to disable your functions when running on a Linux plan, don't use hyphens in your function
names.
Next steps
This article is about disabling automatic triggers. For more information about triggers, see Triggers and
bindings.
How to configure Azure Functions with a virtual
network
This article shows you how to perform tasks related to configuring your function app to connect to and run on a
virtual network. For an in-depth tutorial on how to secure your storage account, please refer to the Connect to a
Virtual Network tutorial. To learn more about Azure Functions and networking, see Azure Functions networking
options.
NOTE
This feature currently works for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service)
plan and for Windows Elastic Premium plans. Consumption tier isn't supported.
Select Save to save the application settings. Changing app settings causes the app to restart.
After the function app restarts, it's connected to the secured storage account.
Next steps
Azure Functions networking options
Azure Functions geo-disaster recovery
When entire Azure regions or datacenters experience downtime, your mission-critical code needs to continue
processing in a different region. This article explains some of the strategies that you can use to deploy functions
to allow for disaster recovery.
Basic concepts
Azure Functions run in a function app in a specific region. There's no built-in redundancy available. To avoid loss
of execution during outages, you can redundantly deploy the same functions to function apps in multiple
regions.
When you run the same function code in multiple regions, there are two patterns to consider: active/active, for
HTTP trigger functions, and active/passive, for event-driven, non-HTTP triggered functions.
To learn more about multi-region deployments, see the guidance in Highly available multi-region web
application.
Next steps
Create Azure Front Door
Event Hubs failover considerations
Move your function app between regions in Azure
Functions
This article describes how to move Azure Functions resources to a different Azure region. You might move your
resources to another region for one of the following reasons:
Take advantage of a new Azure region
Deploy features or services that are available only in specific regions
Meet internal policy and governance requirements
Respond to capacity planning requirements
Azure Functions resources are region-specific and can't be moved across regions. You must create a copy of
your existing function app resources in the target region, then redeploy your functions code over to the new
app.
If minimal downtime is a requirement, consider running your function app in both regions to implement a
disaster recovery architecture:
Azure Functions geo-disaster recovery
Disaster recovery and geo-distribution in Azure Durable Functions
Prerequisites
Make sure that the target region supports Azure Functions and any related service whose resources you
want to move
Have access to the original source code for the functions you're migrating
Prepare
Identify all the function app resources used on the source region, which may include the following:
Function app
Hosting plan
Deployment slots
Custom domains purchased in Azure
TLS/SSL certificates and settings
Configured networking options
Managed identities
Configured application settings - users with enough access can copy all the source application settings by
using the Advanced Edit feature in the portal
Scaling configurations
Your functions may connect to other resources by using triggers or bindings. For information on how to move
those resources across regions, see the documentation for the respective services.
You can also export an Azure Resource Manager template from the existing resources.
Move
Deploy the function app to the target region and review the configured resources.
Redeploy function app
If you have access to the deployment and automation resources that created the function app in the source
region, re-run the same deployment steps in the target region to create and redeploy your app.
If you only have access to the source code, but not to the deployment and automation resources, you can deploy
and configure the function app in the target region by using any of the available deployment technologies or
one of the continuous deployment methods.
Review configured resources
Review and configure the resources identified in the Prepare step above in the target region, if they weren't
configured during deployment.
Move considerations
If your deployment resources and automation don't create a function app, create an app of the same type
in a new hosting plan in the target region
Function app names are globally unique in Azure, so the app in the target region can't have the same name
as the one in the source region
References and application settings that connect your function app to dependencies need to be reviewed and,
when needed, updated. For example, when you move a database that your functions call, you must also
update the application settings or configuration to connect to the database in the target region. Some
application settings, such as the Application Insights instrumentation key or the Azure storage account used
by the function app, might already be configured in the target region and don't need to be updated
Remember to verify your configuration and test your functions in the target region
If you had a custom domain configured, remap the domain name
For functions running on Dedicated plans, also review the App Service Migration Plan in case the plan is
shared with web apps
Next steps
Review the Azure Architecture Center for examples of Azure Functions running in multiple regions as part of
more advanced solution architectures
Monitoring Azure Functions
When you have critical applications and business processes relying on Azure resources, you want to monitor
those resources for their availability, performance, and operation.
This article describes the monitoring data generated by apps hosted in Azure Functions. Azure Functions uses
Azure Monitor to monitor the health of your function apps. If you're unfamiliar with the features of Azure
Monitor common to all Azure services that use it, see Monitoring Azure resources with Azure Monitor.
Azure Functions uses Application Insights to collect and analyze log data from individual function executions in
your function app. For more information, see Monitor functions in Azure.
Monitoring data
Azure Functions collects the same kinds of monitoring data as other Azure resources that are described in Azure
Monitor data collection.
See Monitoring Azure Functions data reference for detailed information on the metrics and logs created
by Azure Functions.
Analyzing metrics
You can analyze metrics for Azure Functions with metrics from other Azure services using metrics explorer by
opening Metrics from the Azure Monitor menu. See Getting started with Azure Metrics Explorer for details on
using this tool.
For a list of the platform metrics collected for Azure Functions, see Monitoring Azure Functions data reference
metrics.
For reference, you can see a list of all resource metrics supported in Azure Monitor.
The following examples use Monitor Metrics to help estimate the cost of running your function app on a
Consumption plan. To learn more about estimating Consumption plan costs, see Estimating Consumption plan
costs.
Portal
Azure CLI
Azure PowerShell
Use Azure Monitor metrics explorer to view cost-related data for your Consumption plan function apps in a
graphical format.
1. In the Azure portal, navigate to your function app.
2. In the left panel, scroll down to Monitoring and choose Metrics .
3. From Metric, choose Function Execution Count and Sum for Aggregation. This adds the sum of the
execution counts during the chosen period to the chart.
4. Select Add metric and repeat steps 2-4 to add Function Execution Units to the chart.
The resulting chart contains the totals for both execution metrics in the chosen time range, which in this case is
two hours.
As the number of execution units is so much greater than the execution count, the chart just shows execution
units.
This chart shows a total of 1.11 billion Function Execution Units consumed in a two-hour period, measured in
MB-milliseconds. To convert to GB-seconds, divide by 1024000. In this example, the function app consumed
1110000000 / 1024000 = 1083.98 GB-seconds. You can take this value and multiply by the current price of
execution time on the Functions pricing page, which gives you the cost of these two hours, assuming you've
already used any free grants of execution time.
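If you prefer the command line, a hedged sketch that retrieves the same execution metrics with the Azure CLI (the resource ID is a placeholder):
az monitor metrics list --resource <FUNCTION_APP_RESOURCE_ID> --metric FunctionExecutionUnits FunctionExecutionCount --aggregation Total --interval PT1M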
Analyzing logs
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema
is outlined in Azure Monitor resource log schema.
The Activity log is a type of platform log in Azure that provides insight into subscription-level events. You can
view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using
Log Analytics.
For a list of the types of resource logs collected for Azure Functions, see Monitoring Azure Functions data
reference.
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see Monitoring Azure
Functions data reference.
Sample Kusto queries
IMPORTANT
When you select Logs from the Azure Functions menu, Log Analytics is opened with the query scope set to the current
resource. This means that log queries will only include data from that resource. If you want to run a query that includes
data from other resources or data from other Azure services, select Logs from the Azure Monitor menu. See Log query
scope and time range in Azure Monitor Log Analytics for details.
Following are queries that you can use to help you monitor your function apps.
The following sample query can help you monitor all of a function app's logs:
FunctionAppLogs
| project TimeGenerated, HostInstanceId, Message, _ResourceId
| order by TimeGenerated desc
The following sample query can help you monitor a specific function's logs:
FunctionAppLogs
| where FunctionName == "<Function name>"
| order by TimeGenerated desc
The following sample query can help you monitor exceptions in a specific function's logs:
FunctionAppLogs
| where ExceptionDetails != ""
| where FunctionName == "<Function name>"
| order by TimeGenerated desc
Alerts
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They
allow you to identify and address issues in your system before your customers notice them. You can set alerts
on metrics, logs, and the activity log. Different types of alerts have benefits and drawbacks.
If you're creating or running an application that runs on Functions, Application Insights may offer
other types of alerts.
The following table lists common and recommended alert rules for Functions.
Metric: HTTP Server Errors. Fires when HTTP 5xx errors exceed a set value.
Activity Log: Create or Update Web App. Fires when the app is created or updated.
Next steps
For more information about monitoring Azure Functions, see the following articles:
Monitor Azure Functions - describes how to monitor a function app.
Monitoring Azure Functions data reference - a reference for the metrics, logs, and other important values
created by your function app.
Monitoring Azure resources with Azure Monitor - details monitoring Azure resources.
Analyze Azure Functions telemetry in Application Insights - describes how to view and query the data being
collected from a function app.
How to configure monitoring for Azure Functions
Azure Functions integrates with Application Insights to better enable you to monitor your function apps.
Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM)
service that collects data generated by your function app, including information your app writes to logs.
Application Insights integration is typically enabled when your function app is created. If your app doesn't have
the instrumentation key set, you must first enable Application Insights integration.
You can use Application Insights without any custom configuration. The default configuration can result in high
volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application
Insights. For information about Application Insights costs, see Application Insights billing. For more information,
see Solutions with high-volume of telemetry.
Later in this article, you learn how to configure and customize the data that your functions send to Application
Insights. For a function app, logging is configured in the host.json file.
NOTE
You can use specially configured application settings to represent specific settings in a host.json file for a specific
environment. This lets you effectively change host.json settings without having to republish the host.json file in your
project. For more information, see Override host.json values.
Configure categories
The Azure Functions logger includes a category for every log. The category indicates which part of the runtime
code or your function code wrote the log. Categories differ between version 1.x and later versions. The following
chart describes the main categories of logs that the runtime creates:
v2.x+
v1.x
NOTE
For .NET class library functions, these categories assume you're using ILogger and not ILogger<T> . For more
information, see the Functions ILogger documentation.
The Table column indicates to which table in Application Insights the log is written.
The host.json file configuration determines how much logging a functions app sends to Application Insights.
For each category, you indicate the minimum log level to send. The host.json settings vary depending on the
Functions runtime version.
The example below defines logging based on the following rules:
For logs of Host.Results or Function , only log events at Error or a higher level.
For logs of Host.Aggregator , log all generated metrics ( Trace ).
For all other logs, including user logs, log only Information level and higher events.
v2.x+
v1.x
{
"logging": {
"fileLoggingMode": "always",
"logLevel": {
"default": "Information",
"Host.Results": "Error",
"Function": "Error",
"Host.Aggregator": "Trace"
}
}
}
If host.json includes multiple categories that start with the same string, the more specific (longer) ones are matched first.
Consider the following example that logs everything in the runtime, except Host.Aggregator , at the Error level:
v2.x+
v1.x
{
"logging": {
"fileLoggingMode": "always",
"logLevel": {
"default": "Information",
"Host": "Error",
"Function": "Error",
"Host.Aggregator": "Information"
}
}
}
You can use a log level setting of None to prevent any logs from being written for a category.
Caution
Azure Functions integrates with Application Insights by storing telemetry events in Application Insights tables.
Setting a category log level to any value other than Information prevents that telemetry from flowing to those
tables. As a result, you won't be able to see the related data in Application Insights or the function Monitor
tab.
In the preceding samples:
If the Host.Results category is set to the Error log level, it only gathers host execution telemetry events in
the requests table for failed function executions, which prevents the display of host execution details for
successful executions in both Application Insights and the function Monitor tab.
If the Function category is set to the Error log level, it stops gathering function telemetry data related to
dependencies, customMetrics, and customEvents for all functions, so you won't see any of this data in
Application Insights. It only gathers traces logged at the Error level.
In both cases, you'll continue to collect errors and exceptions data in Application Insights and the function
Monitor tab. For more information, see Solutions with high-volume of telemetry.
The following host.json example shows the aggregator settings, which control how the runtime batches the data
it collects about function executions:
{
"aggregator": {
"batchSize": 1000,
"flushTimeout": "00:00:30"
}
}
Configure sampling
Application Insights has a sampling feature that can protect you from producing too much telemetry data on
completed executions at times of peak load. When the rate of incoming executions exceeds a specified threshold,
Application Insights starts to randomly ignore some of the incoming executions. The default setting for
maximum number of executions per second is 20 (five in version 1.x). You can configure sampling in host.json.
Here's an example:
v2.x+
v1.x
{
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"maxTelemetryItemsPerSecond" : 20,
"excludedTypes": "Request;Exception"
}
}
}
}
You can exclude certain types of telemetry from sampling. In this example, data of type Request and Exception
is excluded from sampling. This ensures that all function executions (requests) and exceptions are logged, while
other types of telemetry remain subject to sampling.
For more information, see Sampling in Application Insights.
Scale controller logging is controlled by the SCALE_CONTROLLER_LOGGING_ENABLED application setting, whose value
takes the format <DESTINATION>:<VERBOSITY> (for example, AppInsights:Verbose).
<DESTINATION> is the destination to which logs are sent. Valid values are AppInsights and Blob.
When you use AppInsights, ensure that Application Insights is enabled in your function app.
When you set the destination to Blob, logs are created in a blob container named
azure-functions-scale-controller in the default storage account set in the AzureWebJobsStorage application
setting.
TIP
Keep in mind that while scale controller logging is enabled, it adds to the potential costs of monitoring your function
app. Consider enabling logging only until you have collected enough data to understand how the scale controller is
behaving, and then disabling it.
For example, the following Azure CLI command turns on verbose logging from the scale controller to
Application Insights:
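A sketch of the command (the setting value format is <DESTINATION>:<VERBOSITY>):
az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:Verbose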
In this example, replace <FUNCTION_APP_NAME> and <RESOURCE_GROUP_NAME> with the name of your function app
and the resource group name, respectively.
The following Azure CLI command disables logging by setting the verbosity to None :
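A sketch of the command, using the same placeholders:
az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:None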
You can also disable logging by removing the SCALE_CONTROLLER_LOGGING_ENABLED setting using the following
Azure CLI command:
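A sketch of the command, using the same placeholders:
az functionapp config appsettings delete --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --setting-names SCALE_CONTROLLER_LOGGING_ENABLED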
With scale controller logging enabled, you're now able to query your scale controller logs.
3. Expand Change your resource and create an Application Insights resource by using the settings
specified in the following table:
New resource name: a unique app name. It's easiest to use the same name as your function app, which
must be unique in your subscription.
NOTE
Early versions of Functions used built-in monitoring, which is no longer recommended. When you're enabling Application
Insights integration for such a function app, you must also disable built-in logging.
The following screenshot shows Host.Aggregator telemetry data in Application Insights customMetrics
table:
Host.Results category: As described in configure categories, this category provides the runtime-
generated logs indicating the success or failure of a function invocation. The information from this
category is gathered in the Application Insights requests table, and it's shown in the function Monitor
tab and in different Application Insights dashboards (Performance, Failures, and so on). If you set this
category to a value other than Information, you'll only gather telemetry generated at the defined log
level (or higher). For example, setting it to Error results in tracking requests data only for failed
executions.
The following screenshot shows the Host.Results telemetry data displayed in the function Monitor tab:
The following screenshot shows Host.Results telemetry data displayed in Application Insights
Performance dashboard:
Host.Aggregator vs Host.Results : Both categories provide good insights about function executions. If
needed, you can remove the detailed information from one of these categories, so that you can use the
other for monitoring and alerting. Here's a sample:
v2.x+
v1.x
{
"version": "2.0",
"logging": {
"logLevel": {
"default": "Warning",
"Function": "Error",
"Host.Aggregator": "Error",
"Host.Results": "Information",
"Function.Function1": "Information",
"Function.Function1.User": "Error"
},
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"maxTelemetryItemsPerSecond": 1,
"excludedTypes": "Exception"
}
}
}
}
NOTE
Configuration per function isn't supported in v1.x.
Sampling is configured to send one telemetry item per second per type, excluding exceptions. This
sampling happens for each server host running the function app. So, with four instances, this
configuration emits four telemetry items per second per type, plus any exceptions that occur.
NOTE
Metric counts such as request rate and exception rate are adjusted to compensate for the sampling rate, so that
they show approximately correct values in Metric Explorer.
TIP
Experiment with different configurations to ensure that you cover your requirements for logging, monitoring, and
alerting. Also, ensure that you have detailed diagnostics in case of unexpected errors or malfunctions.
logging.logLevel.default: AzureFunctionsJobHost__logging__logLevel__default
logging.logLevel.Host.Aggregator: AzureFunctionsJobHost__logging__logLevel__Host__Aggregator
logging.logLevel.Function: AzureFunctionsJobHost__logging__logLevel__Function
logging.logLevel.Function.Function1: AzureFunctionsJobHost__logging__logLevel__Function.Function1
logging.logLevel.Function.Function1.User: AzureFunctionsJobHost__logging__logLevel__Function.Function1.User
You can override the settings directly at the Azure portal Function App Configuration blade or by using an Azure
CLI or PowerShell script.
az cli
PowerShell
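For example, a hedged Azure CLI sketch that overrides the Host.Aggregator category log level (app and group names are placeholders):
az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings "AzureFunctionsJobHost__logging__logLevel__Host__Aggregator=Information"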
NOTE
Overriding host.json values by changing app settings restarts your function app.
Next steps
For more information about monitoring, see:
Monitor Azure Functions
Analyze Azure Functions telemetry data in Application Insights
Application Insights
Analyze Azure Functions telemetry in Application
Insights
Azure Functions integrates with Application Insights to better enable you to monitor your function apps.
Application Insights collects telemetry data generated by your function app, including information your app
writes to logs. Application Insights integration is typically enabled when your function app is created. If your
function app doesn't have the instrumentation key set, you must first enable Application Insights integration.
By default, the data collected from your function app is stored in Application Insights. In the Azure portal,
Application Insights provides an extensive set of visualizations of your telemetry data. You can drill into error
logs and query events and metrics. This article provides basic examples of how to view and query your collected
data. To learn more about exploring your function app data in Application Insights, see What is Application
Insights?.
To be able to view Application Insights data from a function app, you must have at least Contributor role
permissions on the function app. You also need the Monitoring Reader permission on the
Application Insights instance. You have these permissions by default for any function app and Application
Insights instance that you create.
Insights instance that you create.
To learn more about data retention and potential storage costs, see Data collection, retention, and storage in
Application Insights.
NOTE
It can take up to five minutes for the list to appear while the telemetry client batches data for transmission to the
server. The delay doesn't apply to the Live Metrics Stream. That service connects to the Functions host when you
load the page, so logs are streamed directly to the page.
2. To see the logs for a particular function invocation, select the Date (UTC) column link for that invocation.
The logging output for that invocation appears in a new page.
3. Choose Run in Application Insights to view the source of the query that retrieves the Azure Monitor
log data in Azure Monitor Logs. If this is your first time using Azure Log Analytics in your subscription, you're asked
to enable it.
4. After you enable Log Analytics, the following query is displayed. You can see that the query results are
limited to the last 30 days ( where timestamp > ago(30d) ), and the results show no more than 20 rows (
take 20 ). In contrast, the invocation details list for your function is for the last 30 days with no limit.
For more information, see Query telemetry data later in this article.
For information about how to use Application Insights, see the Application Insights documentation. This section
shows some examples of how to view data in Application Insights. If you're already familiar with Application
Insights, you can go directly to the sections about how to configure and customize the telemetry data.
The following areas of Application Insights can be helpful when evaluating the behavior, performance, and
errors in your functions:
Metrics: Create charts and alerts that are based on metrics. Metrics include the number of function invocations, execution time, and success rates.
Live Metrics: View metrics data as it's created in near real time.
Here's a query example that shows the distribution of requests per worker over the last 30 minutes.
requests
| where timestamp > ago(30m)
| summarize count() by cloud_RoleInstance, bin(timestamp, 1m)
| render timechart
The tables that are available are shown in the Schema tab on the left. Data generated by function
invocations appears in the traces, requests, exceptions, customMetrics, customEvents, and dependencies tables.
The other tables are for availability tests, and client and browser telemetry. You can implement custom telemetry
to add data to them.
Within each table, some of the Functions-specific data is in a customDimensions field. For example, the following
query retrieves all traces that have log level Error .
traces
| where customDimensions.LogLevel == "Error"
The runtime provides the customDimensions.LogLevel and customDimensions.Category fields. You can provide
additional fields in logs that you write in your function code. For an example in C#, see Structured logging in the
.NET class library developer guide.
The scale controller writes its logs to the traces table once you enable them with the
SCALE_CONTROLLER_LOGGING_ENABLED application setting. The following query retrieves those scale controller logs:
traces
| extend CustomDimensions = todynamic(tostring(customDimensions))
| where CustomDimensions.Category == "ScaleControllerLogs"
The following query expands on the previous query to show how to get only logs indicating a change in scale:
traces
| extend CustomDimensions = todynamic(tostring(customDimensions))
| where CustomDimensions.Category == "ScaleControllerLogs"
| where message == "Instance count changed"
| extend Reason = CustomDimensions.Reason
| extend PreviousInstanceCount = CustomDimensions.PreviousInstanceCount
| extend NewInstanceCount = CustomDimensions.CurrentInstanceCount
The following query reports the memory usage of the host instances:
performanceCounters
| where name == "Private Bytes"
| project timestamp, name, value
Determine duration
Azure Monitor tracks metrics at the resource level, which for Functions is the function app. Application Insights
integration emits metrics on a per-function basis. Here's an example analytics query to get the average duration
of a function:
customMetrics
| where name contains "Duration"
| extend averageDuration = valueSum / valueCount
| summarize averageDurationMilliseconds=avg(averageDuration) by name
Next steps
Learn more about monitoring Azure Functions:
Monitor Azure Functions
How to configure monitoring for Azure Functions
Enable streaming execution logs in Azure Functions
While developing an application, you often want to see what's being written to the logs in near real time when
running in Azure.
There are two ways to view a stream of log files being generated by your function executions.
Built-in log streaming : the App Service platform lets you view a stream of your application log files.
This is equivalent to the output seen when you debug your functions during local development and when
you use the Test tab in the portal. All log-based information is displayed. For more information, see
Stream logs. This streaming method supports only a single instance, and can't be used with an app
running on Linux in a Consumption plan.
Live Metrics Stream : when your function app is connected to Application Insights, you can view log
data and other metrics in near real-time in the Azure portal using Live Metrics Stream. Use this method
when monitoring functions running on multiple instances or on Linux in a Consumption plan. This
method uses sampled data.
Log streams can be viewed both in the portal and in most local development environments.
Portal
You can view both types of log streams in the portal.
Built-in log streaming
To view streaming logs in the portal, select the Platform features tab in your function app. Then, under
Monitoring , choose Log streaming .
This connects your app to the log streaming service and application logs are displayed in the window. You can
toggle between Application logs and Web server logs.
Live Metrics Stream
To view the Live Metrics Stream for your app, select the Overview tab of your function app. When you have
Application Insights enabled, you see an Application Insights link under Configured features . This link takes
you to the Application Insights page for your app.
In Application Insights, select Live Metrics Stream. Sampled log entries are displayed under Sample
Telemetry.
Visual Studio Code
To turn on the streaming logs for your function app in Azure:
1. Select F1 to open the command palette, and then search for and run the command Azure Functions:
Start Streaming Logs.
2. Select your function app in Azure, and then select Yes to enable application logging for the function app.
3. Trigger your functions in Azure. Notice that log data is displayed in the Output window in Visual Studio
Code.
4. When you're done, remember to run the command Azure Functions: Stop Streaming Logs to disable
logging for the function app.
Core Tools
Built-in log streaming
Use the func azure functionapp logstream command to start receiving streaming logs of a specific function app
running in Azure, as in the following example:
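func azure functionapp logstream <FunctionAppName>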
NOTE
Built-in log streaming isn't yet enabled in Core Tools for function apps running on Linux in a Consumption plan. For these
hosting plans, you instead need to use Live Metrics Stream to view the logs in near-real time.
Azure CLI
You can enable streaming logs by using the Azure CLI. Use the following commands to sign in, choose your
subscription, and stream log files:
az login
az account list
az account set --subscription <subscriptionNameOrId>
az webapp log tail --resource-group <RESOURCE_GROUP_NAME> --name <FUNCTION_APP_NAME>
Azure PowerShell
You can enable streaming logs by using Azure PowerShell. For PowerShell, use the Set-AzWebApp command to
enable logging on the function app, as shown in the following snippet:
# Enable Logs
Set-AzWebApp -RequestTracingEnabled $True -HttpLoggingEnabled $True -DetailedErrorLoggingEnabled $True -ResourceGroupName $ResourceGroupName -Name $AppName
Next steps
Monitor Azure Functions
Analyze Azure Functions telemetry in Application Insights
Monitoring Azure Functions with Azure Monitor
Logs
Azure Functions offers an integration with Azure Monitor Logs to monitor functions. This article shows you how
to configure Azure Functions to send system-generated and user-generated logs to Azure Monitor Logs.
Azure Monitor Logs gives you the ability to consolidate logs from different resources in the same workspace,
where they can be analyzed with queries to quickly retrieve, consolidate, and analyze collected data. You can create
and test queries using Log Analytics in the Azure portal and then either directly analyze the data using these
tools or save queries for use with visualizations or alert rules.
Azure Monitor uses a version of the Kusto query language used by Azure Data Explorer that is suitable for
simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. You
can quickly learn the query language using multiple lessons.
NOTE
Integration with Azure Monitor Logs is currently in public preview and isn't supported for function apps running on
version 1.x of the Functions runtime.
Setting up
1. From the Monitoring section of your function app in the Azure portal, select Diagnostic settings , and
then select Add diagnostic setting .
2. In the Diagnostic settings page, under Category details and log, choose FunctionAppLogs.
The FunctionAppLogs table contains the desired logs.
3. Under Destination details, choose Send to Log Analytics and then select your Log Analytics workspace.
4. Enter a Diagnostic settings name, and then select Save.
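If you prefer scripting, a roughly equivalent Azure CLI sketch follows; the diagnostic setting name is arbitrary,
and the resource and workspace IDs are placeholders:
az monitor diagnostic-settings create --name FunctionAppLogsToWorkspace --resource <function-app-resource-id> --workspace <log-analytics-workspace-id> --logs '[{"category":"FunctionAppLogs","enabled":true}]'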
User-generated logs
To generate custom logs, use the logging statement specific to your language. Here are sample code snippets:
C#
Java
JavaScript
PowerShell
Python
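For example, in C#, the ILogger passed to your function writes entries that land in the FunctionAppLogs
table (a minimal sketch; log is the ILogger instance passed to the function):
log.LogInformation("An informational message from my function.");
Once logs are flowing, you can query them with Log Analytics. To retrieve the most recent entries: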
FunctionAppLogs
| order by TimeGenerated desc
To see the logs for a specific function:
FunctionAppLogs
| where FunctionName == "<Function name>"
Exceptions
FunctionAppLogs
| where ExceptionDetails != ""
| order by TimeGenerated asc
Next steps
Review the Azure Functions overview.
Learn more about Azure Monitor Logs.
Learn more about the query language.
Configure your App Service or Azure Functions app
to use Azure AD login
This article shows you how to configure authentication for Azure App Service or Azure Functions so that your
app signs in users with the Microsoft identity platform (Azure AD) as the authentication provider.
The App Service Authentication feature can automatically create an app registration with the Microsoft identity
platform. You can also use a registration that you or a directory admin creates separately.
Create a new app registration automatically
Use an existing registration created separately
NOTE
The option to create a new registration is not available for government clouds. Instead, define a registration separately.
Client Secret: Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and the App Service will return access and refresh tokens. When the client secret is not set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the EasyAuth token store.
NOTE
For a Microsoft Store application, use the package SID as the URI instead.
4. Select Create .
5. After the app registration is created, copy the value of Application (client) ID .
6. Select API permissions > Add a permission > My APIs .
7. Select the app registration you created earlier for your App Service app. If you don't see the app
registration, make sure that you've added the user_impersonation scope in Create an app registration
in Azure AD for your App Service app.
8. Under Delegated permissions , select user_impersonation , and then select Add permissions .
You have now configured a native client application that can request access to your App Service app on behalf of a
user.
Daemon client application (service-to-service calls)
Your application can acquire a token to call a Web API hosted in your App Service or Function app on behalf of
itself (not on behalf of a user). This scenario is useful for non-interactive daemon applications that perform tasks
without a logged-in user. It uses the standard OAuth 2.0 client credentials grant.
1. In the Azure portal, select Active Directory > App registrations > New registration.
2. In the Register an application page, enter a Name for your daemon app registration.
3. For a daemon application, you don't need a Redirect URI so you can keep that empty.
4. Select Create .
5. After the app registration is created, copy the value of Application (client) ID .
6. Select Certificates & secrets > New client secret > Add. Copy the client secret value shown in the page.
It won't be shown again.
You can now request an access token using the client ID and client secret by setting the resource parameter to
the Application ID URI of the target app. The resulting access token can then be presented to the target app
using the standard OAuth 2.0 Authorization header, and App Service Authentication / Authorization will validate
and use the token as usual to indicate that the caller (an application in this case, not a user) is authenticated.
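As a sketch, a client credentials request against the Azure AD v1.0 token endpoint might look like the following
(the tenant ID, client credentials, and Application ID URI are placeholders):
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/token" -d "grant_type=client_credentials" -d "client_id=<client-id>" -d "client_secret=<client-secret>" -d "resource=<application-id-uri>"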
At present, this allows any client application in your Azure AD tenant to request an access token and
authenticate to the target app. If you also want to enforce authorization to allow only certain client applications,
you must perform some additional configuration.
1. Define an App Role in the manifest of the app registration representing the App Service or Function app you
want to protect.
2. On the app registration representing the client that needs to be authorized, select API permissions > Add a
permission > My APIs .
3. Select the app registration you created earlier. If you don't see the app registration, make sure that you've
added an App Role.
4. Under Application permissions , select the App Role you created earlier, and then select Add
permissions .
5. Make sure to click Grant admin consent to authorize the client application to request the permission.
6. Similar to the previous scenario (before any roles were added), you can now request an access token for the
same target resource , and the access token will include a roles claim containing the App Roles that were
authorized for the client application.
7. Within the target App Service or Function app code, you can now validate that the expected roles are present
in the token (this is not performed by App Service Authentication / Authorization). For more information, see
Access user claims.
You have now configured a daemon client application that can access your App Service app using its own
identity.
Best practices
Regardless of the configuration you use to set up authentication, the following best practices will keep your
tenant and applications more secure:
Give each App Service app its own permissions and consent.
Configure each App Service app with its own registration.
Avoid permission sharing between environments by using separate app registrations for separate
deployment slots. When testing new code, this practice can help prevent issues from affecting the production
app.
Next steps
App Service Authentication / Authorization overview.
Tutorial: Authenticate and authorize users end-to-end in Azure App Service
Tutorial: Authenticate and authorize users in a web app that accesses Azure Storage and Microsoft Graph
Tutorial: Authenticate and authorize users end-to-end in Azure App Service
Configure your App Service or Azure Functions app
to use Facebook login
This article shows how to configure Azure App Service or Azure Functions to use Facebook as an authentication
provider.
To complete the procedure in this article, you need a Facebook account that has a verified email address and a
mobile phone number. To create a new Facebook account, go to facebook.com.
IMPORTANT
The app secret is an important security credential. Do not share this secret with anyone or distribute it within a
client application.
10. The Facebook account that you used to register the application is an administrator of the app. At this
point, only administrators can sign in to this application.
To authenticate other Facebook accounts, select App Review and enable Make <your-app-name>
public to enable the general public to access the app by using Facebook authentication.
Add Facebook information to your application
1. Sign in to the Azure portal and navigate to your app.
2. Select Authentication in the menu on the left. Click Add identity provider .
3. Select Facebook in the identity provider dropdown. Paste in the App ID and App Secret values that you
obtained previously.
The secret will be stored as a slot-sticky application setting named
FACEBOOK_PROVIDER_AUTHENTICATION_SECRET . You can update that setting later to use Key Vault references if
you wish to manage the secret in Azure Key Vault.
4. If this is the first identity provider configured for the application, you will also be prompted with an App
Service authentication settings section. Otherwise, you may move on to the next step.
These options determine how your application responds to unauthenticated requests, and the default
selections will redirect all requests to log in with this new provider. You can customize this
behavior now or adjust these settings later from the main Authentication screen by choosing Edit next
to Authentication settings . To learn more about these options, see Authentication flow.
5. (Optional) Click Next: Scopes and add any scopes needed by the application. These will be requested at
login time for browser-based flows.
6. Click Add .
You're now ready to use Facebook for authentication in your app. The provider will be listed on the
Authentication screen. From there, you can edit or delete this provider configuration.
Next steps
App Service Authentication / Authorization overview.
Tutorial: Authenticate and authorize users end-to-end in Azure App Service
Configure your App Service or Azure Functions app
to use GitHub login
This article shows how to configure Azure App Service or Azure Functions to use GitHub as an authentication
provider.
To complete the procedure in this article, you need a GitHub account. To create a new GitHub account, go to
GitHub.
IMPORTANT
The client secret is an important security credential. Do not share this secret with anyone or distribute it with your
app.
Configure your App Service or Azure Functions app
to use Google login
This article shows you how to configure Azure App Service or Azure Functions to use Google as an authentication
provider.
To complete the procedure in this article, you must have a Google account that has a verified email address. To
create a new Google account, go to accounts.google.com.
IMPORTANT
The App secret is an important security credential. Do not share this secret with anyone or distribute it within a
client application.
NOTE
For adding scopes: you can define what permissions your application has in the provider's
registration portal. The app can request scopes at login time that leverage these permissions.
You are now ready to use Google for authentication in your app. The provider will be listed on the
Authentication screen. From there, you can edit or delete this provider configuration.
Next steps
App Service Authentication / Authorization overview.
Tutorial: Authenticate and authorize users end-to-end in Azure App Service
Configure your App Service or Azure Functions app
to use Twitter login
This article shows how to configure Azure App Service or Azure Functions to use Twitter as an authentication
provider.
To complete the procedure in this article, you need a Twitter account that has a verified email address and phone
number. To create a new Twitter account, go to twitter.com.
4. At the bottom of the page, type at least 100 characters in Tell us how this app will be used , then
select Create . Click Create again in the pop-up. The application details are displayed.
5. Select the Keys and Access Tokens tab.
Make a note of these values:
API key
API secret key
IMPORTANT
The API secret key is an important security credential. Do not share this secret with anyone or distribute it with
your app.
Next steps
App Service Authentication / Authorization overview.
Tutorial: Authenticate and authorize users end-to-end in Azure App Service
Configure your App Service or Azure Functions app
to login using an OpenID Connect provider
This article shows you how to configure Azure App Service or Azure Functions to use a custom authentication
provider that adheres to the OpenID Connect specification. OpenID Connect (OIDC) is an industry standard used
by many identity providers (IDPs). You do not need to understand the details of the specification in order to
configure your app to use an adherent IDP.
You can configure your app to use one or more OIDC providers. Each must be given a unique alphanumeric
name in the configuration, and only one can serve as the default redirect target.
NOTE
Some providers may require additional steps for their configuration and how to use the values they provide. For example,
Apple provides a private key, which is not itself used as the OIDC client secret; you instead must use it to craft a
JWT, which is treated as the secret you provide in your app config (see the "Creating the Client Secret" section of
the Sign in with Apple documentation).
You will need to collect a client ID and client secret for your application.
IMPORTANT
The client secret is an important security credential. Do not share this secret with anyone or distribute it within a client
application.
Additionally, you will need the OpenID Connect metadata for the provider. This is often exposed via a
configuration metadata document, which is the provider's Issuer URL suffixed with
/.well-known/openid-configuration . Gather this configuration URL.
If you are unable to use a configuration metadata document, you will need to gather the following values
separately:
The issuer URL (sometimes shown as issuer )
The OAuth 2.0 Authorization endpoint (sometimes shown as authorization_endpoint )
The OAuth 2.0 Token endpoint (sometimes shown as token_endpoint )
The URL of the OAuth 2.0 JSON Web Key Set document (sometimes shown as jwks_uri )
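For example, you can retrieve the metadata document with a plain HTTP request (the issuer URL is a placeholder):
curl https://<issuer-url>/.well-known/openid-configuration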
Next steps
App Service Authentication / Authorization overview.
Tutorial: Authenticate and authorize users end-to-end in Azure App Service
Configure your App Service or Azure Functions app
to sign in using a Sign in with Apple provider
(Preview)
This article shows you how to configure Azure App Service or Azure Functions to use Sign in with Apple as an
authentication provider.
To complete the procedure in this article, you must have enrolled in the Apple developer program. To enroll in
the Apple developer program, go to developer.apple.com/programs/enroll.
Caution
Enabling Sign in with Apple will disable management of the App Service Authentication / Authorization feature
for your application through some clients, such as the Azure portal, Azure CLI, and Azure PowerShell. The
feature relies on a new API surface which, during preview, is not yet accounted for in all management
experiences.
4. On the Register an App ID page, provide a description and a bundle ID, and select Sign in with Apple
from the capabilities list. Then select Continue. Take note of your App ID Prefix (Team ID) from this step;
you'll need it later.
7. On the Register a New Identifier page, choose Services IDs and select Continue.
8. On the Register a Services ID page, provide a description and an identifier. The description is what will be
shown to the user on the consent screen. The identifier will be your client ID used in configuring the Apple
provider with your app service. Then select Configure.
9. On the pop-up window, set the Primary App ID to the App ID you created earlier. Specify your application's
domain in the domain section. For the return URL, use the URL <app-url>/.auth/login/apple/callback . For
example, https://contoso.azurewebsites.net/.auth/login/apple/callback . Then select Add and Save .
10. Review the service registration information and select Save .
{
"alg": "ES256",
"kid": "URKEYID001",
}.{
"sub": "com.yourcompany.app1",
"nbf": 1560203207,
"exp": 1560289607,
"iss": "ABC123DEFG",
"aud": "https://appleid.apple.com"
}.[Signature]
Note: Apple doesn't accept client secret JWTs with an expiration date more than six months after the creation (or
nbf) date. That means you'll need to rotate your client secret, at minimum, every six months.
More information about generating and validating tokens can be found in Apple's developer documentation.
Sign the client secret JWT
You'll use the .p8 file you downloaded previously to sign the client secret JWT. This file is a PKCS#8 file that
contains the private signing key in PEM format. There are many libraries that can create and sign the JWT for
you.
There are different kinds of open-source libraries available online for creating and signing JWT tokens. For more
information about generating JWT tokens, see JSON Web Token (JWT). For example, one way of generating the
client secret is by importing the Microsoft.IdentityModel.Tokens NuGet package and running a small amount of
C# code shown below.
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography;
using Microsoft.IdentityModel.Tokens;

public static string GetAppleClientSecret(string teamId, string clientId, string keyId, string p8key)
{
    string audience = "https://appleid.apple.com";
    // One possible completion: load the ES256 private key from the .p8 contents
    // (base64 body only, without the PEM header and footer lines).
    var ecdsa = ECDsa.Create();
    ecdsa.ImportPkcs8PrivateKey(Convert.FromBase64String(p8key), out _);
    var signingCred = new SigningCredentials(
        new ECDsaSecurityKey(ecdsa) { KeyId = keyId }, SecurityAlgorithms.EcdsaSha256);
    // Apple rejects client secrets that expire more than six months after nbf.
    var token = new JwtSecurityToken(
        issuer: teamId,
        audience: audience,
        claims: new[] { new Claim("sub", clientId) },
        notBefore: DateTime.UtcNow,
        expires: DateTime.UtcNow.AddDays(180),
        signingCredentials: signingCred);
    var tokenHandler = new JwtSecurityTokenHandler();
    return tokenHandler.WriteToken(token);
}
IMPORTANT
The client secret is an important security credential. Do not share this secret with anyone or distribute it within a client
application.
Add the client secret as an application setting for the app, using a setting name of your choice. Make note of this
name for later.
This section will walk you through updating the configuration to include your new IDP. An example configuration
follows.
1. Within the identityProviders object, add an apple object if one doesn't already exist.
2. Assign an object to that key with a registration object within it, and optionally a login object:
"apple" : {
"registration" : {
"clientId": "<client ID>",
"clientSecretSettingName": "APP_SETTING_CONTAINING_APPLE_CLIENT_SECRET"
},
"login": {
"scopes": []
}
}
a. Within the registration object, set the clientId to the client ID you collected.
b. Within the registration object, set clientSecretSettingName to the name of the application setting
where you stored the client secret.
c. Within the login object, you may choose to set the scopes array to include a list of scopes used when
authenticating with Apple, such as "name" and "email". If scopes are configured, they'll be explicitly
requested on the consent screen when users sign in for the first time.
Once this configuration has been set, you're ready to use your Apple provider for authentication in your app.
A complete configuration might look like the following example (where the APPLE_GENERATED_CLIENT_SECRET
setting points to an application setting containing a generated JWT):
{
"platform": {
"enabled": true
},
"globalValidation": {
"redirectToProvider": "apple",
"unauthenticatedClientAction": "RedirectToLoginPage"
},
"identityProviders": {
"apple": {
"registration": {
"clientId": "com.contoso.example.client",
"clientSecretSettingName": "APPLE_GENERATED_CLIENT_SECRET"
},
"login": {
"scopes": []
}
}
},
"login": {
"tokenStore": {
"enabled": true
}
}
}
Next steps
App Service Authentication / Authorization overview.
Tutorial: Authenticate and authorize users end-to-end in Azure App Service
Customize sign-in and sign-out in Azure App
Service authentication
This article shows you how to customize user sign-ins and sign-outs while using the built-in authentication and
authorization in App Service.
When the user clicks on one of the links, the respective sign-in page opens to sign in the user.
To redirect the user post-sign-in to a custom URL, use the post_login_redirect_uri query string parameter (not
to be confused with the Redirect URI in your identity provider configuration). For example, to navigate the user
to /Home/Index after sign-in, use the following HTML code:
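<a href="/.auth/login/aad?post_login_redirect_uri=/Home/Index">Log in</a>
(The aad alias here assumes the Azure AD provider; substitute the alias of the provider you configured.)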
Client-directed sign-in
In a client-directed sign-in, the application signs in the user to the identity provider using a provider-specific
SDK. The application code then submits the resulting authentication token to App Service for validation (see
Authentication flow) using an HTTP POST request. The Azure Mobile Apps SDKs use this sign-in flow. This
validation itself doesn't actually grant you access to the desired app resources, but a successful validation will
give you a session token that you can use to access app resources.
To validate the provider token, the App Service app must first be configured with the desired provider. At runtime,
after you retrieve the authentication token from your provider, post the token to /.auth/login/<provider> for
validation. For example:
POST https://<appname>.azurewebsites.net/.auth/login/aad HTTP/1.1
Content-Type: application/json
{"id_token":"<token>","access_token":"<token>"}
The token format varies slightly according to the provider. See the following table for details:
twitter {"access_token":"
<access_token>",
"access_token_secret":"
<acces_token_secret>"}
If the provider token is validated successfully, the API returns with an authenticationToken in the response body,
which is your session token.
{
"authenticationToken": "...",
"user": {
"userId": "sid:..."
}
}
Once you have this session token, you can access protected app resources by adding the X-ZUMO-AUTH header to
your HTTP requests. For example:
GET https://<appname>.azurewebsites.net/api/products/1
X-ZUMO-AUTH: <authenticationToken_value>
By default, a successful sign-out redirects the client to the URL /.auth/logout/done . You can change the post-
sign-out redirect page by adding the post_logout_redirect_uri query parameter. For example:
GET /.auth/logout?post_logout_redirect_uri=/index.html
GET /.auth/logout?post_logout_redirect_uri=https%3A%2F%2Fmyexternalurl.com
"identityProviders": {
"azureActiveDirectory": {
"login": {
"loginParameters": ["domain_hint=<domain-name>"],
}
}
}
5. Click Put.
This setting appends the domain_hint query string parameter to the login redirect URL.
IMPORTANT
It's possible for the client to remove the domain_hint parameter after receiving the redirect URL, and then log in
with a different domain. So while this function is convenient, it's not a security feature.
2. In the browser explorer of your App Service files, navigate to site/wwwroot. If a Web.config doesn't exist,
create it by selecting + > New File .
3. Select the pencil for Web.config to edit it. Add the following configuration code and click Save . If
Web.config already exists, just add the <authorization> element with everything in it. Add the accounts
you want to allow in the <allow> element.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.web>
<authorization>
<allow users="user1@contoso.com,user2@contoso.com"/>
<deny users="*"/>
</authorization>
</system.web>
</configuration>
More resources
Tutorial: Authenticate and authorize users end-to-end
Environment variables and app settings for authentication
Work with user identities in Azure App Service
authentication
This article shows you how to work with user identities when using the built-in authentication and
authorization in App Service.
NOTE
Different language frameworks may present these headers to the app code in different formats, such as lowercase or title
case.
For ASP.NET 4.6 apps, App Service populates ClaimsPrincipal.Current with the authenticated user's claims, so
you can follow the standard .NET code pattern, including the [Authorize] attribute. Similarly, for PHP apps, App
Service populates the _SERVER['REMOTE_USER'] variable. For Java apps, the claims are accessible from the Tomcat
servlet.
For Azure Functions, ClaimsPrincipal.Current is not populated for .NET code, but you can still find the user
claims in the request headers, or get the ClaimsPrincipal object from the request context or even through a
binding parameter. See working with client identities in Azure Functions for more information.
For .NET Core, Microsoft.Identity.Web supports populating the current user with App Service authentication. To
learn more, you can read about it on the Microsoft.Identity.Web wiki, or see it demonstrated in this tutorial for a
web app accessing Microsoft Graph.
Next steps
Tutorial: Authenticate and authorize users end-to-end
Work with OAuth tokens in Azure App Service
authentication
This article shows you how to work with OAuth tokens while using the built-in authentication and authorization
in App Service.
Provider: Header names
Google: X-MS-TOKEN-GOOGLE-ID-TOKEN, X-MS-TOKEN-GOOGLE-ACCESS-TOKEN, X-MS-TOKEN-GOOGLE-EXPIRES-ON, X-MS-TOKEN-GOOGLE-REFRESH-TOKEN
Twitter: X-MS-TOKEN-TWITTER-ACCESS-TOKEN, X-MS-TOKEN-TWITTER-ACCESS-TOKEN-SECRET
NOTE
Different language frameworks may present these headers to the app code in different formats, such as lowercase or title
case.
From your client code (such as a mobile app or in-browser JavaScript), send an HTTP GET request to /.auth/me
(token store must be enabled). The returned JSON has the provider-specific tokens.
NOTE
Access tokens are for accessing provider resources, so they are present only if you configure your provider with a client
secret. To see how to get refresh tokens, see Refresh access tokens.
"identityProviders": {
"azureActiveDirectory": {
"login": {
"loginParameters": ["scope=openid profile email offline_access"]
}
}
}
5. Click Put.
NOTE
The scope that gives you a refresh token is offline_access. See how it's used in Tutorial: Authenticate and authorize
users end-to-end in Azure App Service. The other scopes are requested by default by App Service already. For
information on these default scopes, see OpenID Connect Scopes.
Once your provider is configured, you can find the refresh token and the expiration time for the access token in
the token store.
To refresh your access token at any time, just call /.auth/refresh in any language. The following snippet uses
jQuery to refresh your access tokens from a JavaScript client.
function refreshTokens() {
  let refreshUrl = "/.auth/refresh";
  $.ajax(refreshUrl)
    .done(function() {
      console.log("Token refresh completed successfully.");
    })
    .fail(function() {
      console.log("Token refresh failed. See application logs for details.");
    });
}
If a user revokes the permissions granted to your app, your call to /.auth/me may fail with a 403 Forbidden
response. To diagnose errors, check your application logs for details.
NOTE
The grace period only applies to the App Service authenticated session, not the tokens from the identity providers. There
is no grace period for the expired provider tokens.
Next steps
Tutorial: Authenticate and authorize users end-to-end
Manage the API and runtime versions of App
Service authentication
This article shows you how to customize the API and runtime versions of the built-in authentication and
authorization in App Service.
There are two versions of the management API for App Service authentication. The V2 version is required for
the "Authentication" experience in the Azure portal. An app already using the V1 API can upgrade to the V2
version once a few changes have been made. Specifically, secret configuration must be moved to slot-sticky
application settings. This can be done automatically from the "Authentication" section of the portal for your app.
The V2 API does not support creation or editing of Microsoft Account as a distinct provider as was done in V1.
Rather, it leverages the converged Microsoft Identity Platform to sign in users with both Azure AD and personal
Microsoft accounts. When switching to the V2 API, the V1 Azure Active Directory configuration is used to
configure the Microsoft Identity Platform provider. The V1 Microsoft Account provider will be carried forward in
the migration process and continue to operate as normal, but it is recommended that you move to the newer
Microsoft Identity Platform model. See Support for Microsoft Account provider registrations to learn more.
The automated migration process will move provider secrets into application settings and then convert the rest
of the configuration into the new format. To use the automatic migration:
1. Navigate to your app in the portal and select the Authentication menu option.
2. If the app is configured using the V1 model, you will see an Upgrade button.
3. Review the description in the confirmation prompt. If you are ready to perform the migration, click Upgrade
in the prompt.
Manually managing the migration
The following steps will allow you to manually migrate the application to the V2 API if you do not wish to use
the automatic version mentioned above.
Moving secrets to application settings
1. Get your existing configuration by using the V1 API:
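One way to retrieve it is with az rest; in this sketch, the subscription ID, resource group, app name, and API
version are placeholders to adjust:
az rest --method post --url "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<app-name>/config/authsettings/list?api-version=2020-06-01"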
In the resulting JSON payload, make note of the secret value used for each provider you have configured:
AAD: clientSecret
Google: googleClientSecret
Facebook: facebookAppSecret
Twitter: twitterConsumerSecret
Microsoft Account: microsoftAccountClientSecret
IMPORTANT
The secret values are important security credentials and should be handled carefully. Do not share these values or
persist them on a local machine.
2. Create slot-sticky application settings for each secret value. You may choose the name of each application
setting. Its value should match what you obtained in the previous step or reference a Key Vault secret
that you have created with that value.
To create the setting, you can use the Azure portal or run a variation of the following for each provider:
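For example (the setting name here is only an illustration; choose any name you like):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --slot-settings MICROSOFT_PROVIDER_AUTHENTICATION_SECRET=<client-secret>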
NOTE
The application settings for this configuration should be marked as slot-sticky, meaning that they will not move
between environments during a slot swap operation. This is because your authentication configuration itself is tied
to the environment.
3. Create a new JSON file named authsettings.json. Take the output that you received previously and
remove each secret value from it. Write the remaining output to the file, making sure that no secret is
included. In some cases, the configuration may have arrays containing empty strings. Make sure that
microsoftAccountOAuthScopes does not, and if it does, switch that value to null.
4. Add a property to authsettings.json which points to the application setting name you created earlier for
each provider:
AAD: clientSecretSettingName
Google: googleClientSecretSettingName
Facebook: facebookAppSecretSettingName
Twitter: twitterConsumerSecretSettingName
Microsoft Account: microsoftAccountClientSecretSettingName
An example file after this operation might look similar to the following, in this case only configured for
AAD:
{
"id": "/subscriptions/00d563f8-5b89-4c6a-bcec-
c1b9f6d607e0/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/mywebapp/config/authsetting
s",
"name": "authsettings",
"type": "Microsoft.Web/sites/config",
"location": "Central US",
"properties": {
"enabled": true,
"runtimeVersion": "~1",
"unauthenticatedClientAction": "AllowAnonymous",
"tokenStoreEnabled": true,
"allowedExternalRedirectUrls": null,
"defaultProvider": "AzureActiveDirectory",
"clientId": "3197c8ed-2470-480a-8fae-58c25558ac9b",
"clientSecret": "",
"clientSecretSettingName": "MICROSOFT_IDENTITY_AUTHENTICATION_SECRET",
"clientSecretCertificateThumbprint": null,
"issuer": "https://sts.windows.net/0b2ef922-672a-4707-9643-9a5726eec524/",
"allowedAudiences": [
"https://mywebapp.azurewebsites.net"
],
"additionalLoginParams": null,
"isAadAutoProvisioned": true,
"aadClaimsAuthorization": null,
"googleClientId": null,
"googleClientSecret": null,
"googleClientSecretSettingName": null,
"googleOAuthScopes": null,
"facebookAppId": null,
"facebookAppSecret": null,
"facebookAppSecretSettingName": null,
"facebookOAuthScopes": null,
"gitHubClientId": null,
"gitHubClientSecret": null,
"gitHubClientSecretSettingName": null,
"gitHubOAuthScopes": null,
"twitterConsumerKey": null,
"twitterConsumerSecret": null,
"twitterConsumerSecretSettingName": null,
"microsoftAccountClientId": null,
"microsoftAccountClientSecret": null,
"microsoftAccountClientSecretSettingName": null,
"microsoftAccountOAuthScopes": null,
"isAuthFromFile": "false"
}
}
5. Submit this file as the new Authentication/Authorization configuration for your app:
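For example, with az rest (identifiers and API version are again placeholders):
az rest --method put --url "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<app-name>/config/authsettings?api-version=2020-06-01" --body @./authsettings.json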
6. Validate that your app is still operating as expected after this change.
7. Delete the file used in the previous steps.
You have now migrated the app to store identity provider secrets as application settings.
Support for Microsoft Account provider registrations
If your existing configuration contains a Microsoft Account provider and does not contain an Azure Active
Directory provider, you can switch the configuration over to the Azure Active Directory provider and then
perform the migration. To do this:
1. Go to App registrations in the Azure portal and find the registration associated with your Microsoft
Account provider. It may be under the "Applications from personal account" heading.
2. Navigate to the "Authentication" page for the registration. Under "Redirect URIs" you should see an entry
ending in /.auth/login/microsoftaccount/callback . Copy this URI.
3. Add a new URI that matches the one you just copied, except instead have it end in /.auth/login/aad/callback
. This will allow the registration to be used by the App Service Authentication / Authorization configuration.
4. Navigate to the App Service Authentication / Authorization configuration for your app.
5. Collect the configuration for the Microsoft Account provider.
6. Configure the Azure Active Directory provider using the "Advanced" management mode, supplying the client
ID and client secret values you collected in the previous step. For the Issuer URL, use
<authentication-endpoint>/<tenant-id>/v2.0 , replacing <authentication-endpoint> with the authentication
endpoint for your cloud environment (for example, "https://login.microsoftonline.com" for global Azure) and
<tenant-id> with your Directory (tenant) ID.
7. Once you have saved the configuration, test the login flow by navigating in your browser to the
/.auth/login/aad endpoint on your site and complete the sign-in flow.
8. At this point, you have successfully copied the configuration over, but the existing Microsoft Account provider
configuration remains. Before you remove it, make sure that all parts of your app reference the Azure Active
Directory provider through login links, etc. Verify that all parts of your app work as expected.
9. Once you have validated that things work against the Azure Active Directory provider, you may remove
the Microsoft Account provider configuration.
WARNING
It is possible to converge the two registrations by modifying the supported account types for the AAD app registration.
However, this would force a new consent prompt for Microsoft Account users, and those users' identity claims may
be different in structure; notably, the sub claim changes value because a new App ID is being used. This approach
is not recommended unless thoroughly understood. You should instead wait for support for the two registrations in
the V2 API surface.
Switching to V2
Once the above steps have been performed, navigate to the app in the Azure portal. Select the "Authentication
(preview)" section.
Alternatively, you may make a PUT request against the config/authsettingsv2 resource under the site resource.
The schema for the payload is the same as captured in File-based configuration.
Using the Azure CLI, view the current middleware version with the az webapp auth show command.
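az webapp auth show --name <my_app_name> --resource-group <my_resource_group>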
In this code, replace <my_app_name> with the name of your app. Also replace <my_resource_group> with the name
of the resource group for your app.
You will see the runtimeVersion field in the CLI output. It will resemble the following example output, which has
been truncated for clarity:
{
"additionalLoginParams": null,
"allowedAudiences": null,
...
"runtimeVersion": "1.3.2",
...
}
From the version endpoint
You can also hit the /.auth/version endpoint on an app to view the current middleware version that the app is
running on. It will resemble the following example output:
{
"version": "1.3.2"
}
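To pin the runtime to a specific version, one approach is az webapp auth update; this is a sketch, so confirm
that the --runtime-version flag is available in your CLI version:
az webapp auth update --name <my_app_name> --resource-group <my_resource_group> --runtime-version <version>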
Replace <my_app_name> with the name of your app. Also replace <my_resource_group> with the name of the
resource group for your app. Also, replace <version> with a valid version of the 1.x runtime or ~1 for the latest
version. See the release notes on the different runtime versions to help determine the version to pin to.
You can run this command from the Azure Cloud Shell by choosing Try it in the preceding code sample. You can
also use the Azure CLI locally to execute this command after executing az login to sign in.
Next steps
Tutorial: Authenticate and authorize users end-to-end
File-based configuration in Azure App Service
authentication
With App Service authentication, the authentication settings can be configured with a file. You may need to use
file-based configuration to use certain preview capabilities of App Service authentication / authorization before
they are exposed via Azure Resource Manager APIs.
IMPORTANT
Remember that your app payload, and therefore this file, may move between environments, as with slots. It is likely you
would want a different app registration pinned to each slot, and in these cases, you should continue to use the standard
configuration method instead of using the configuration file.
NOTE
The format for platform.configFilePath varies between platforms. On Windows, both relative and absolute paths are
supported. Relative is recommended. For Linux, only absolute paths are supported currently, so the value of the setting
should be "/home/site/wwwroot/auth.json" or similar.
Once you have made this configuration update, the contents of the file will be used to define the behavior of
App Service Authentication / Authorization for that site. If you ever wish to return to Azure Resource Manager
configuration, you can do so by removing the setting platform.configFilePath or changing it to null.
{
"platform": {
"enabled": <true|false>
},
"globalValidation": {
"unauthenticatedClientAction": "RedirectToLoginPage|AllowAnonymous|RejectWith401|RejectWith404",
"redirectToProvider": "<default provider alias>",
"excludedPaths": [
"/path1",
"/path2",
"/path3/subpath/*"
]
},
"httpSettings": {
"requireHttps": <true|false>,
"routes": {
"apiPrefix": "<api prefix>"
},
"forwardProxy": {
"convention": "NoProxy|Standard|Custom",
"customHostHeaderName": "<host header value>",
"customProtoHeaderName": "<proto header value>"
}
},
"login": {
"routes": {
"logoutEndpoint": "<logout endpoint>"
},
"tokenStore": {
"enabled": <true|false>,
"tokenRefreshExtensionHours": "<double>",
"fileSystem": {
"directory": "<directory to store the tokens in if using a file system token store
(default)>"
},
"azureBlobStorage": {
"sasUrlSettingName": "<app setting name containing the sas url for the Azure Blob Storage if
opting to use that for a token store>"
}
},
"preserveUrlFragmentsForLogins": <true|false>,
"allowedExternalRedirectUrls": [
"https://uri1.azurewebsites.net/",
"https://uri2.azurewebsites.net/",
"url_scheme_of_your_app://easyauth.callback"
],
"cookieExpiration": {
"convention": "FixedTime|IdentityDerived",
"timeToExpiration": "<timespan>"
},
"nonce": {
"validateNonce": <true|false>,
"nonceExpirationInterval": "<timespan>"
}
},
"identityProviders": {
"azureActiveDirectory": {
"enabled": <true|false>,
"registration": {
"openIdIssuer": "<issuer url>",
"clientId": "<app id>",
"clientSecretSettingName": "APP_SETTING_CONTAINING_AAD_SECRET",
},
"login": {
"loginParameters": [
"paramName1=value1",
"paramName2=value2"
]
},
"validation": {
"allowedAudiences": [
"audience1",
"audience2"
]
}
},
"facebook": {
"enabled": <true|false>,
"registration": {
"appId": "<app id>",
"appSecretSettingName": "APP_SETTING_CONTAINING_FACEBOOK_SECRET"
},
"graphApiVersion": "v3.3",
"login": {
"scopes": [
"public_profile",
"email"
]
}
},
"gitHub": {
"enabled": <true|false>,
"registration": {
"clientId": "<client id>",
"clientSecretSettingName": "APP_SETTING_CONTAINING_GITHUB_SECRET"
},
"login": {
"scopes": [
"profile",
"email"
]
}
},
"google": {
"enabled": true,
"registration": {
"clientId": "<client id>",
"clientSecretSettingName": "APP_SETTING_CONTAINING_GOOGLE_SECRET"
},
"login": {
"scopes": [
"profile",
"email"
]
},
"validation": {
"allowedAudiences": [
"audience1",
"audience2"
]
}
},
"twitter": {
"enabled": <true|false>,
"registration": {
"consumerKey": "<consumer key>",
"consumerSecretSettingName": "APP_SETTING_CONTAINING TWITTER_CONSUMER_SECRET"
}
},
"apple": {
"enabled": <true|false>,
"registration": {
"clientId": "<client id>",
"clientSecretSettingName": "APP_SETTING_CONTAINING_APPLE_SECRET"
},
"login": {
"scopes": [
"profile",
"email"
]
}
},
"openIdConnectProviders": {
"<providerName>": {
"enabled": <true|false>,
"registration": {
"clientId": "<client id>",
"clientCredential": {
"clientSecretSettingName": "<name of app setting containing client secret>"
},
"openIdConnectConfiguration": {
"authorizationEndpoint": "<url specifying authorization endpoint>",
"tokenEndpoint": "<url specifying token endpoint>",
"issuer": "<url specifying issuer>",
"certificationUri": "<url specifying jwks endpoint>",
"wellKnownOpenIdConfiguration": "<url specifying .well-known/open-id-configuration
endpoint - if this property is set, the other properties of this object are ignored, and
authorizationEndpoint, tokenEndpoint, issuer, and certificationUri are set to the corresponding values
listed at this endpoint>"
}
},
"login": {
"nameClaimType": "<name of claim containing name>",
"scopes": [
"openid",
"profile",
"email"
],
"loginParameterNames": [
"paramName1=value1",
"paramName2=value2"
]
}
},
//...
}
}
}
More resources
Tutorial: Authenticate and authorize users end-to-end
Environment variables and app settings for authentication
Add a TLS/SSL certificate in Azure App Service
Azure App Service provides a highly scalable, self-patching web hosting service. This article shows you how to
create, upload, or import a private certificate or a public certificate into App Service.
Once the certificate is added to your App Service app or function app, you can secure a custom DNS name with
it or use it in your application code.
NOTE
A certificate uploaded into an app is stored in a deployment unit that is bound to the app service plan's resource group,
region and operating system combination (internally called a webspace). This makes the certificate accessible to other
apps in the same resource group and region combination.
The following table lists the options you have for adding certificates in App Service:
Option: Description
Create a free App Service managed certificate: A private certificate that's free of charge and easy to use if you just need to secure your custom domain in App Service.
Purchase an App Service certificate: A private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options.
Import a certificate from Key Vault: Useful if you use Azure Key Vault to manage your PKCS12 certificates. See Private certificate requirements.
Upload a private certificate: If you already have a private certificate from a third-party provider, you can upload it. See Private certificate requirements.
Upload a public certificate: Public certificates are not used to secure custom domains, but you can load them into your code if you need them to access remote resources.
Prerequisites
Create an App Service app.
For a private certificate, make sure that it satisfies all requirements from App Service.
Free certificate only:
Map the domain you want a certificate for to App Service. For information, see Tutorial: Map an
existing custom DNS name to Azure App Service.
For a root domain (like contoso.com), make sure your app doesn't have any IP restrictions configured.
Both certificate creation and its periodic renewal for a root domain depends on your app being
reachable from the internet.
NOTE
Elliptic Curve Cryptography (ECC) certificates can work with App Service but are not covered by this article. Work
with your certificate authority on the exact steps to create ECC certificates.
Check to make sure that your web app is not in the F1 or D1 tier. Your web app's current tier is highlighted by a
dark blue box.
Custom SSL is not supported in the F1 or D1 tier. If you need to scale up, follow the steps in the next section.
Otherwise, close the Scale up page and skip the Scale up your App Service plan section.
Scale up your App Service plan
Select any of the non-free tiers (B1, B2, B3, or any tier in the Production category). For additional options, click
See additional options.
Click Apply.
When you see the following notification, the scale operation is complete.
IMPORTANT
Because Azure fully manages the certificates on your behalf, any aspect of the managed certificate, including the root
issuer, can be changed at any time. These changes are outside of your control. You should avoid a hard dependency
on, or certificate "pinning" to, the managed certificate or any part of the certificate hierarchy. If you need the
certificate pinning behavior, add a certificate to your custom domain using any other available method in this
article.
NOTE
The free certificate is issued by DigiCert. For some domains, you must explicitly allow DigiCert as a certificate issuer by
creating a CAA domain record with the value: 0 issue digicert.com .
In the Azure portal, from the left menu, select App Services > <app-name>.
From the left navigation of your app, select TLS/SSL settings > Private Key Certificates (.pfx) > Create
App Service Managed Certificate.
Select the custom domain to create a free certificate for and select Create. You can create only one certificate
for each supported custom domain.
When the operation completes, you see the certificate in the Private Key Certificates list.
IMPORTANT
To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in Create
binding.
NOTE
All prices shown are for examples only.
Use the following table to help you configure the certificate. When finished, click Create.
Setting: Description
Resource group: The resource group that will contain the certificate. You can use a new resource group or select the same resource group as your App Service app, for example.
Naked Domain Host Name: Specify the root domain here. The issued certificate secures both the root domain and the www subdomain. In the issued certificate, the Common Name field contains the root domain, and the Subject Alternative Name field contains the www domain. To secure any subdomain only, specify the fully qualified domain name of the subdomain here (for example, mysubdomain.contoso.com).
NOTE
App Service Certificates purchased from Azure are issued by GoDaddy. For some domains, you must explicitly allow
GoDaddy as a certificate issuer by creating a CAA domain record with the value: 0 issue godaddy.com
Key Vault is an Azure service that helps safeguard cryptographic keys and secrets used by cloud applications
and services. It's the storage of choice for App Service certificates.
In the Key Vault Status page, click Key Vault Repository to create a new vault or choose an existing vault. If
you choose to create a new vault, use the following table to help you configure the vault and click Create. Create
the new Key Vault inside the same subscription and resource group as your App Service app.
Pricing tier: For information, see Azure Key Vault pricing details.
Access policies: Defines the applications and the allowed access to the vault resources. You can configure it later, following the steps at Assign a Key Vault access policy.
Virtual Network Access: Restrict vault access to certain Azure virtual networks. You can configure it later, following the steps at Configure Azure Key Vault Firewalls and Virtual Networks.
Once you've selected the vault, close the Key Vault Repository page. The Step 1: Store option should show a
green check mark for success. Keep the page open for the next step.
NOTE
Currently, App Service Certificate supports only the Key Vault access policy model, not the RBAC model.
Select App Service Verification. Since you already mapped the domain to your web app (see Prerequisites),
it's already verified. Just click Verify to finish this step. Click the Refresh button until the message Certificate
is Domain Verified appears.
IMPORTANT
For a Standard certificate, the certificate provider gives you a certificate for the requested top-level domain and its www subdomain (for example, contoso.com and www.contoso.com). However, beginning December 1, 2021, a restriction was introduced on the App Service and Manual verification methods. Both use HTML page verification to verify domain ownership. With this method, the certificate provider is no longer allowed to include the www subdomain when issuing, rekeying, or renewing a certificate.
The Domain and Mail verification methods continue to include the www subdomain with the requested top-level
domain in the certificate.
NOTE
Four types of domain verification methods are supported:
App Service - The most convenient option when the domain is already mapped to an App Service app in the same subscription. It takes advantage of the fact that the App Service app has already verified the domain ownership (see the previous note).
Domain - Verify an App Service domain that you purchased from Azure. Azure automatically adds the verification TXT
record for you and completes the process.
Mail - Verify the domain by sending an email to the domain administrator. Instructions are provided when you select
the option.
Manual - Verify the domain using either an HTML page (Standard certificate only, see the previous note) or a DNS TXT record. Instructions are provided when you select the option. The HTML page option doesn't work for web apps with HTTPS Only enabled.
NOTE
Currently, Key Vault certificates support only the Key Vault access policy model, not the RBAC model.
Key Vault: The vault with the certificate you want to import.

Certificate: Select from the list of PKCS12 certificates in the vault. All PKCS12 certificates in the vault are listed with their thumbprints, but not all are supported in App Service.
When the operation completes, you see the certificate in the Private Key Certificates list. If the import fails with an error, the certificate doesn't meet the requirements for App Service.
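The import can also be scripted. A minimal Azure CLI sketch, assuming placeholder app, vault, and certificate names:

az webapp config ssl import \
    --name <app-name> \
    --resource-group <resource-group-name> \
    --key-vault <vault-name> \
    --key-vault-certificate-name <certificate-name>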
NOTE
If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 24
hours.
IMPORTANT
To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in Create
binding.
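If your certificate authority gives you multiple certificates in the certificate chain, merge them into a single file, ordered with your server certificate first and the root certificate last: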
-----BEGIN CERTIFICATE-----
<your entire Base64 encoded SSL certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<The entire Base64 encoded intermediate certificate 1>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<The entire Base64 encoded intermediate certificate 2>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<The entire Base64 encoded root certificate>
-----END CERTIFICATE-----
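If you generated your certificate request with OpenSSL, you can export the merged certificate and its private key to PFX with a command along these lines (file names are placeholders):

openssl pkcs12 -export -out myserver.pfx -inkey <private-key-file> -in <merged-certificate-file>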
When prompted, define an export password. You'll use this password when uploading your TLS/SSL certificate
to App Service later.
If you used IIS or Certreq.exe to generate your certificate request, install the certificate to your local machine,
and then export the certificate to PFX.
Upload certificate to App Service
You're now ready to upload the certificate to App Service.
In the Azure portal, from the left menu, select App Services > <app-name>.
From the left navigation of your app, select TLS/SSL settings > Private Key Certificates (.pfx) > Upload Certificate.
In PFX Certificate File, select your PFX file. In Certificate password, type the password that you created when you exported the PFX file. When finished, click Upload.
When the operation completes, you see the certificate in the Private Key Certificates list.
IMPORTANT
To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in Create
binding.
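The upload can also be scripted. A minimal Azure CLI sketch, with placeholder values:

az webapp config ssl upload \
    --name <app-name> \
    --resource-group <resource-group-name> \
    --certificate-file <path-to-pfx-file> \
    --certificate-password <pfx-password>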
NOTE
The renewal process requires that the well-known service principal for App Service has the required permissions on your
key vault. This permission is configured for you when you import an App Service Certificate through the portal, and
should not be removed from your key vault.
By default, App Service certificates have a one-year validity period. Near the time of expiration, App Service certificates can be renewed in one-year increments, either automatically or manually. In effect, the renewal process gives you a new App Service certificate with the expiration date extended to one year from the existing certificate's expiration date.
To toggle the automatic renewal setting of your App Service certificate at any time, select the certificate in the
App Service Certificates page, then click Auto Renew Settings in the left navigation.
Select On or Off and click Save . Certificates can start automatically renewing 32 days before expiration if you
have automatic renewal turned on.
To manually renew the certificate instead, click Manual Renew . You can request to manually renew your
certificate 60 days before expiration.
Once the renew operation is complete, click Sync . The sync operation automatically updates the hostname
bindings for the certificate in App Service without causing any downtime to your apps.
NOTE
If you don't click Sync, App Service automatically syncs your certificate within 24 hours.
Rekeying your certificate rolls it over to a new certificate issued by the certificate authority. You may be required to reverify domain ownership.
Once the rekey operation is complete, click Sync . The sync operation automatically updates the hostname
bindings for the certificate in App Service without causing any downtime to your apps.
NOTE
If you don't click Sync, App Service automatically syncs your certificate within 24 hours.
Export certificate
Because an App Service Certificate is a Key Vault secret, you can export a PFX copy of it and use it for other
Azure services or outside of Azure.
NOTE
The exported certificate is an unmanaged artifact. For example, it isn't synced when the App Service Certificate is renewed.
You must export the renewed certificate and install it where you need it.
Azure portal
Azure CLI
1. Select the certificate in the App Service Certificates page, then select Export Certificate from the left navigation.
2. Select Open in Key Vault.
3. Select the current version of the certificate.
4. Select Download as a certificate.
The downloaded PFX file is a raw PKCS12 file that contains both the public and private certificates, and its import password is an empty string. You can install it locally by leaving the password field empty. Note that the file can't be uploaded into App Service as-is because it isn't password protected.
Delete certificate
Deletion of an App Service certificate is final and irreversible. Deletion of an App Service Certificate resource
results in the certificate being revoked. Any binding in App Service with this certificate becomes invalid. To
prevent accidental deletion, Azure puts a lock on the certificate. To delete an App Service certificate, you must
first remove the delete lock on the certificate.
Select the certificate in the App Service Certificates page, then select Locks in the left navigation.
Find the lock on your certificate with the lock type Delete . To the right of it, select Delete .
Now you can delete the App Service certificate. From the left navigation, select Overview > Delete. In the confirmation dialog, type the certificate name and select OK.
# Before continuing, go to your DNS configuration UI for your custom domain and follow the
# instructions at https://aka.ms/appservicecustomdns to configure a CNAME record for the
# hostname "www" and point it to your web app's default domain name.
# Upgrade App Service plan to Basic tier (minimum required by custom SSL certificates)
Set-AzAppServicePlan -Name $webappname -ResourceGroupName $webappname `
-Tier Basic
More resources
Secure a custom DNS name with a TLS/SSL binding in Azure App Service
Enforce HTTPS
Enforce TLS 1.1/1.2
Use a TLS/SSL certificate in your code in Azure App Service
FAQ: App Service Certificates
Set up Azure App Service access restrictions
By setting up access restrictions, you can define a priority-ordered allow/deny list that controls network access
to your app. The list can include IP addresses or Azure Virtual Network subnets. When there are one or more
entries, an implicit deny all exists at the end of the list.
The access restriction capability works with all Azure App Service-hosted workloads. The workloads can include
web apps, API apps, Linux apps, Linux custom containers and Functions.
When a request is made to your app, the FROM address is evaluated against the rules in your access restriction
list. If the FROM address is in a subnet that's configured with service endpoints to Microsoft.Web, the source
subnet is compared against the virtual network rules in your access restriction list. If the address isn't allowed
access based on the rules in the list, the service replies with an HTTP 403 status code.
The access restriction capability is implemented in the App Service front-end roles, which are upstream of the
worker hosts where your code runs. Therefore, access restrictions are effectively network access-control lists
(ACLs).
The ability to restrict access to your web app from an Azure virtual network is enabled by service endpoints. With service endpoints, you can restrict access to a multi-tenant service from selected subnets. Service endpoints can't be used to restrict traffic to apps that are hosted in an App Service Environment. If you're in an App Service Environment, you can control access to your app by applying IP address rules.
NOTE
The service endpoints must be enabled both on the networking side and for the Azure service that they're being enabled
with. For a list of Azure services that support service endpoints, see Virtual Network service endpoints.
5. On the Access Restrictions page, review the list of access restriction rules that are defined for your app.
The list displays all the current restrictions that are applied to the app. If you have a virtual network
restriction on your app, the table shows whether the service endpoints are enabled for Microsoft.Web. If
no restrictions are defined on your app, the app is accessible from anywhere.
Permissions
You must have at least role-based access control permissions on the subnet or at a higher level to configure access restrictions through the Azure portal, CLI, or when setting the site config properties directly.
NOTE
There is a limit of 512 access restriction rules. If you require more than 512 access restriction rules, we suggest that
you consider installing a standalone security product, such as Azure Front Door, Azure App Gateway, or an alternative
WAF.
All available service tags are supported in access restriction rules. Each service tag represents a list of IP ranges
from Azure services. A list of these services and links to the specific ranges can be found in the service tag
documentation. Use Azure Resource Manager templates or scripting to configure more advanced rules like
regional scoped rules.
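For example, a simple IP-based rule like the one shown in the JSON examples later in this article can be created from the Azure CLI. A sketch, with placeholder app and resource group names:

az webapp config access-restriction add \
    --name <app-name> \
    --resource-group <resource-group-name> \
    --rule-name "IP example rule" \
    --action Allow \
    --ip-address 122.133.144.0/24 \
    --priority 100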
Edit a rule
1. To begin editing an existing access restriction rule, on the Access Restrictions page, select the rule you
want to edit.
2. On the Edit Access Restriction pane, make your changes, and then select Update rule . Edits are
effective immediately, including changes in priority ordering.
NOTE
When you edit a rule, you can't switch between rule types.
Delete a rule
To delete a rule, on the Access Restrictions page, select the ellipsis (...) next to the rule you want to delete, and
then select Remove .
PowerShell example:
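# A sketch using the Az.Websites module; the app and rule names are placeholders.
Remove-AzWebAppAccessRestrictionRule -ResourceGroupName "<resource-group-name>" `
    -WebAppName "<app-name>" -Name "IP example rule"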
NOTE
Working with service tags, HTTP headers, or multi-source rules in the Azure CLI requires at least version 2.23.0. You can verify the version of the installed CLI with: az version
You can also set values manually by doing either of the following:
Use an Azure REST API PUT operation on the app configuration in Azure Resource Manager. The location
for this information in Azure Resource Manager is:
management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Web/sites/<app-name>/config/web?api-version=2020-06-01
Use a Resource Manager template. As an example, you can use resources.azure.com and edit the
ipSecurityRestrictions block to add the required JSON.
The JSON syntax for the earlier example is:
{
"properties": {
"ipSecurityRestrictions": [
{
"ipAddress": "122.133.144.0/24",
"action": "Allow",
"priority": 100,
"name": "IP example rule"
}
]
}
}
The JSON syntax for an advanced example using a service tag and HTTP header restriction is:
{
"properties": {
"ipSecurityRestrictions": [
{
"ipAddress": "AzureFrontDoor.Backend",
"tag": "ServiceTag",
"action": "Allow",
"priority": 100,
"name": "Azure Front Door example",
"headers": {
"x-azure-fdid": [
"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
]
}
}
]
}
}
Next steps
Access restrictions for Azure Functions
Application Gateway integration with service endpoints
How to use managed identities for App Service and
Azure Functions
This article shows you how to create a managed identity for App Service and Azure Functions applications and
how to use it to access other resources.
IMPORTANT
Managed identities for App Service and Azure Functions won't behave as expected if your app is migrated across
subscriptions/tenants. The app needs to obtain a new identity, which is done by disabling and re-enabling the feature.
Downstream resources also need to have access policies updated to use the new identity.
NOTE
Managed identities are not available for apps deployed in Azure Arc.
A managed identity from Azure Active Directory (Azure AD) allows your app to easily access other Azure AD-
protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not
require you to provision or rotate any secrets. For more about managed identities in Azure AD, see Managed
identities for Azure resources.
Your application can be granted two types of identities:
A system-assigned identity is tied to your application and is deleted if your app is deleted. An app can
only have one system-assigned identity.
A user-assigned identity is a standalone Azure resource that can be assigned to your app. An app can have
multiple user-assigned identities.
1. In the left navigation of your app's page, scroll down to the Settings group.
2. Select Identity .
3. Within the System assigned tab, switch Status to On . Click Save .
NOTE
To find the managed identity for your web app or slot app in the Azure portal, under Enterprise applications , look in
the User settings section. Usually, the slot name is similar to <app name>/slots/<slot name> .
Azure portal
Azure CLI
Azure PowerShell
ARM template
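The same system-assigned identity can be enabled from the command line. A minimal Azure CLI sketch, with placeholder names (use az functionapp identity assign for a function app):

az webapp identity assign --name <app-name> --resource-group <resource-group-name>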
IMPORTANT
The back-end services for managed identities maintain a cache per resource URI for around 24 hours. If you update the
access policy of a particular target resource and immediately retrieve a token for that resource, you may continue to get a
cached token with outdated permissions until that token expires. There's currently no way to force a token refresh.
HTTP GET
.NET
JavaScript
Python
Java
PowerShell
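For the HTTP GET option, a sketch of the request your code can make from inside App Service, assuming the platform-injected IDENTITY_ENDPOINT and IDENTITY_HEADER environment variables; a successful call returns a JSON payload like the following:

curl "${IDENTITY_ENDPOINT}?resource=https://vault.azure.net&api-version=2019-08-01" \
    -H "X-IDENTITY-HEADER: ${IDENTITY_HEADER}"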
{
"access_token": "eyJ0eXAi…",
"expires_on": "1586984735",
"resource": "https://vault.azure.net",
"token_type": "Bearer",
"client_id": "5E29463D-71DA-4FE0-8E69-999B57DB23B0"
}
This response is the same as the response for the Azure AD service-to-service access token request. To access
Key Vault, you will then add the value of access_token to a client connection with the vault.
For more information on the REST endpoint, see REST endpoint reference.
Remove an identity
When you remove a system-assigned identity, it's deleted from Azure Active Directory. System-assigned
identities are also automatically removed from Azure Active Directory when you delete the app resource itself.
Azure portal
Azure CLI
Azure PowerShell
ARM template
1. In the left navigation of your app's page, scroll down to the Settings group.
2. Select Identity . Then follow the steps based on the identity type:
System-assigned identity : Within the System assigned tab, switch Status to Off . Click Save .
User-assigned identity : Click the User assigned tab, select the checkbox for the identity, and click
Remove . Click Yes to confirm.
NOTE
There is also an application setting that can be set, WEBSITE_DISABLE_MSI, which just disables the local token service.
However, it leaves the identity in place, and tooling will still show the managed identity as "on" or "enabled." As a result,
use of this setting is not recommended.
IMPORTANT
If you are attempting to obtain tokens for user-assigned identities, you must include one of the optional properties.
Otherwise the token service will attempt to obtain a token for a system-assigned identity, which may or may not exist.
Next steps
Tutorial: Connect to SQL Database from App Service without secrets using a managed identity
Access Azure Storage securely using a managed identity
Call Microsoft Graph securely using a managed identity
Connect securely to services with Key Vault secrets
Use Key Vault references for App Service and Azure
Functions
This topic shows you how to work with secrets from Azure Key Vault in your App Service or Azure Functions
application without requiring any code changes. Azure Key Vault is a service that provides centralized secrets
management, with full control over access policies and audit history.
Azure CLI
Azure PowerShell
2. Make sure that the vault's configuration accounts for the network or subnet through which your app will
access it.
NOTE
Windows containers currently don't support Key Vault references over virtual network integration.
Azure CLI
Azure PowerShell
Reference syntax
A Key Vault reference is of the form @Microsoft.KeyVault({referenceString}) , where {referenceString} is
replaced by one of the following options:
@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
Alternatively:
@Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret)
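To set such a reference as an application setting from the Azure CLI, a sketch with placeholder names:

az webapp config appsettings set \
    --name <app-name> \
    --resource-group <resource-group-name> \
    --settings MySecret="@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)"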
Rotation
If a version is not specified in the reference, the app uses the latest version that exists in the key vault. When newer versions become available, such as with a rotation event, the app automatically updates and begins using the latest version within 24 hours. The delay is because App Service caches the values of the key vault references and refetches them every 24 hours. Any configuration change to the app that results in a site restart causes an immediate refetch of all referenced secrets.
TIP
Most application settings using Key Vault references should be marked as slot settings, as you should have separate
vaults for each environment.
If you skip validation and either the connection string or the content share is invalid, the app will be unable to start properly and will serve only HTTP 500 errors.
As part of creating the site, it is also possible that attempted mounting of the content share could fail due to
managed identity permissions not being propagated or the virtual network integration not being set up. You can
defer setting up Azure Files until later in the deployment template to accommodate this. See Azure Resource
Manager deployment to learn more. App Service will use a default file system until Azure Files is set up, and files
are not copied over, so you will need to ensure that no deployment attempts occur during the interim period
before Azure Files is mounted.
Azure Resource Manager deployment
When automating resource deployments through Azure Resource Manager templates, you may need to
sequence your dependencies in a particular order to make this feature work. Of note, you will need to define
your application settings as their own resource, rather than using a siteConfig property in the site definition.
This is because the site needs to be defined first so that the system-assigned identity is created with it and can
be used in the access policy.
An example pseudo-template for a function app might look like the following:
{
//...
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
//...
},
{
"type": "Microsoft.Insights/components",
"name": "[variables('appInsightsName')]",
//...
},
{
"type": "Microsoft.Web/sites",
"name": "[variables('functionAppName')]",
"identity": {
"type": "SystemAssigned"
},
//...
"resources": [
{
"type": "config",
"name": "appsettings",
//...
"dependsOn": [
"[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
"[resourceId('Microsoft.KeyVault/vaults/', variables('keyVaultName'))]",
"[resourceId('Microsoft.KeyVault/vaults/secrets', variables('keyVaultName'),
variables('storageConnectionStringName'))]",
"[resourceId('Microsoft.KeyVault/vaults/secrets', variables('keyVaultName'),
variables('appInsightsKeyName'))]"
],
"properties": {
"AzureWebJobsStorage": "[concat('@Microsoft.KeyVault(SecretUri=',
reference(variables('storageConnectionStringResourceId')).secretUriWithVersion, ')')]",
"WEBSITE_CONTENTAZUREFILECONNECTIONSTRING": "
[concat('@Microsoft.KeyVault(SecretUri=',
reference(variables('storageConnectionStringResourceId')).secretUriWithVersion, ')')]",
"APPINSIGHTS_INSTRUMENTATIONKEY": "[concat('@Microsoft.KeyVault(SecretUri=',
reference(variables('appInsightsKeyResourceId')).secretUriWithVersion, ')')]",
"WEBSITE_ENABLE_SYNC_UPDATE_SITE": "true"
//...
}
},
{
"type": "sourcecontrols",
"name": "web",
//...
"dependsOn": [
"[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
"[resourceId('Microsoft.Web/sites/config', variables('functionAppName'),
'appsettings')]"
],
}
]
},
{
"type": "Microsoft.KeyVault/vaults",
"name": "[variables('keyVaultName')]",
//...
"dependsOn": [
"[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
],
"properties": {
//...
"accessPolicies": [
{
"tenantId": "[reference(resourceId('Microsoft.Web/sites/',
variables('functionAppName')), '2020-12-01', 'Full').identity.tenantId]",
"objectId": "[reference(resourceId('Microsoft.Web/sites/',
variables('functionAppName')), '2020-12-01', 'Full').identity.principalId]",
"permissions": {
"secrets": [ "get" ]
}
}
]
},
"resources": [
{
"type": "secrets",
"name": "[variables('storageConnectionStringName')]",
//...
"dependsOn": [
"[resourceId('Microsoft.KeyVault/vaults/', variables('keyVaultName'))]",
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
],
"properties": {
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=',
variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountResourceId'),'2019-09-
01').key1)]"
}
},
{
"type": "secrets",
"name": "[variables('appInsightsKeyName')]",
//...
"dependsOn": [
"[resourceId('Microsoft.KeyVault/vaults/', variables('keyVaultName'))]",
"[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]"
],
"properties": {
"value": "[reference(resourceId('microsoft.insights/components/',
variables('appInsightsName')), '2019-09-01').InstrumentationKey]"
}
}
]
}
]
}
NOTE
In this example, the source control deployment depends on the application settings. This is normally unsafe behavior, as
the app setting update behaves asynchronously. However, because we have included the
WEBSITE_ENABLE_SYNC_UPDATE_SITE application setting, the update is synchronous. This means that the source control
deployment will only begin once the application settings have been fully updated. For more information about app settings, see Environment variables and app settings in Azure App Service.
Troubleshooting Key Vault References
If a reference isn't resolved properly, the reference string itself is used instead. For application settings, this means that an environment variable is created whose value has the @Microsoft.KeyVault(...) syntax. This may cause the application to throw errors, because it was expecting a secret with a certain structure.
Most commonly, this is due to a misconfiguration of the Key Vault access policy. However, it could also be due to
a secret no longer existing or a syntax error in the reference itself.
If the syntax is correct, you can view other causes for error by checking the current resolution status in the
portal. Navigate to Application Settings and select "Edit" for the reference in question. Below the setting
configuration, you should see status information, including any errors. The absence of these implies that the
reference syntax is invalid.
You can also use one of the built-in detectors to get additional information.
Using the detector for App Service
1. In the portal, navigate to your app.
2. Select Diagnose and solve problems .
3. Choose Availability and Performance and select Web app down.
4. Find Key Vault Application Settings Diagnostics and click More info .
Using the detector for Azure Functions
1. In the portal, navigate to your app.
2. Navigate to Platform features.
3. Select Diagnose and solve problems .
4. Choose Availability and Performance and select Function app down or reporting errors.
5. Click on Key Vault Application Settings Diagnostics.
Encrypt your application data at rest using
customer-managed keys
Encrypting your function app's application data at rest requires an Azure Storage Account and an Azure Key
Vault. These services are used when you run your app from a deployment package.
Azure Storage provides encryption at rest. You can use system-provided keys or your own customer-managed keys. This is where your application data is stored when it's not running in a function app in Azure.
Running from a deployment package is a deployment feature of App Service. It allows you to deploy your site content from an Azure Storage account using a Shared Access Signature (SAS) URL.
Key Vault references are a security feature of App Service. They allow you to import secrets at runtime as application settings. Use a Key Vault reference to keep the SAS URL of your Azure Storage account encrypted in Key Vault.
NOTE
Save this SAS URL; it's used later to enable secure access to the deployment package at runtime.
Adding this application setting causes your function app to restart. After the app has restarted, browse to it and
make sure that the app has started correctly using the deployment package. If the application didn't start
correctly, see the Run from package troubleshooting guide.
Encrypt the application setting using Key Vault references
Now you can replace the value of the WEBSITE_RUN_FROM_PACKAGE application setting with a Key Vault reference to
the SAS-encoded URL. This keeps the SAS URL encrypted in Key Vault, which provides an extra layer of security.
1. Use the following az keyvault create command to create a Key Vault instance.
2. Follow these instructions to grant your app access to your key vault:
3. Use the following az keyvault secret set command to add your external URL as a secret in your key
vault:
4. Use the following az webapp config appsettings set command to create the WEBSITE_RUN_FROM_PACKAGE
application setting with the value as a Key Vault reference to the external URL:
The <secret-version> will be in the output of the previous az keyvault secret set command.
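A combined Azure CLI sketch of steps 1 through 4, assuming placeholder names (including the external-url secret name) and the SAS URL you generated earlier:

# 1. Create the vault.
az keyvault create --name <vault-name> --resource-group <resource-group-name> --location <location>

# 2. Grant the function app's system-assigned identity permission to read secrets.
az functionapp identity assign --name <app-name> --resource-group <resource-group-name>
az keyvault set-policy --name <vault-name> --object-id <principal-id> --secret-permissions get

# 3. Store the SAS URL as a secret.
az keyvault secret set --vault-name <vault-name> --name external-url --value "<sas-url>"

# 4. Point WEBSITE_RUN_FROM_PACKAGE at the secret through a Key Vault reference.
az functionapp config appsettings set --name <app-name> --resource-group <resource-group-name> \
    --settings WEBSITE_RUN_FROM_PACKAGE="@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/external-url/<secret-version>)"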
Updating this application setting causes your function app to restart. After the app has restarted, browse to it and make sure that it has started correctly using the Key Vault reference.
3. Update the key vault reference in your application setting to the new secret version:
The <secret-version> will be in the output of the previous az keyvault secret set command.
Summary
Your application files are now encrypted at rest in your storage account. When your function app starts, it
retrieves the SAS URL from your key vault. Finally, the function app loads the application files from the storage
account.
If you need to revoke the function app's access to your storage account, you can either revoke access to the key
vault or rotate the storage account keys, which invalidates the SAS URL.
Next steps
Key Vault references for App Service
Azure Storage encryption for data at rest
Store unstructured data using Azure Functions and
Azure Cosmos DB
Azure Cosmos DB is a great way to store unstructured and JSON data. Combined with Azure Functions, Cosmos
DB makes storing data quick and easy with much less code than required for storing data in a relational
database.
NOTE
At this time, the Azure Cosmos DB trigger, input bindings, and output bindings work with SQL API and Graph API
accounts only.
In Azure Functions, input and output bindings provide a declarative way to connect to external service data from
your function. In this article, learn how to update an existing function to add an output binding that stores
unstructured data in an Azure Cosmos DB document.
Prerequisites
To complete this tutorial:
This topic uses as its starting point the resources created in Create your first function from the Azure portal. If
you haven't already done so, please complete these steps now to create your function app.
Location: The region closest to your users. Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data.

Apply Azure Cosmos DB free tier discount: Apply or Do not apply. With Azure Cosmos DB free tier, you get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about free tier.
NOTE
You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt in when creating the account. If you don't see the option to apply the free tier discount, another account in the subscription has already been enabled with free tier.
5. In the Global Distribution tab, you can leave the default values for this quickstart.
NOTE
The following options are not available if you select Serverless as the Capacity mode:
Apply Free Tier Discount
Geo-redundancy
Multi-region Writes
If true, creates the Cosmos DB database and collection: Yes. The collection doesn't already exist, so create it.

Cosmos DB account connection: New setting. Select New, then choose Azure Cosmos DB Account and the Database account you created earlier, and then select OK. This creates an application setting for your account connection, which is used by the binding to connect to the database.
C#
JavaScript
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public static IActionResult Run(HttpRequest req, out object taskDocument, ILogger log)
{
    string name = req.Query["name"];
    string task = req.Query["task"];
    string duedate = req.Query["duedate"];

    // The Cosmos DB output binding persists taskDocument when the function returns.
    taskDocument = new { name, task, duedate };
    return new OkResult();
}
This code sample reads the HTTP Request query strings and assigns them to fields in the taskDocument object.
The taskDocument binding sends the object data from this binding parameter to be stored in the bound
document database. The database is created the first time the function runs.
You've successfully added a binding to your HTTP trigger to store unstructured data in an Azure Cosmos DB.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, you can delete them by deleting the resource group.
From the Azure portal menu or Home page, select Resource groups . Then, on the Resource groups page,
select myResourceGroup .
On the myResourceGroup page, make sure that the listed resources are the ones you want to delete.
Select Delete resource group , type myResourceGroup in the text box to confirm, and then select Delete .
Next steps
For more information about binding to a Cosmos DB database, see Azure Functions Cosmos DB bindings.
Azure Functions triggers and bindings concepts
Learn how Functions integrates with other services.
Azure Functions developer reference
Provides more technical information about the Functions runtime and a reference for coding functions and
defining triggers and bindings.
Code and test Azure Functions locally
Describes the options for developing your functions locally.
Add messages to an Azure Storage queue using
Functions
In Azure Functions, input and output bindings provide a declarative way to make data from external services available to your code. In this quickstart, you use an output binding to create a message in a queue when a function is triggered by an HTTP request. You use your Azure storage account to view the queue messages that your function creates.
Prerequisites
To complete this quickstart:
An Azure subscription. If you don't have one, create a free account before you begin.
Follow the directions in Create your first function from the Azure portal and don't do the Clean up
resources step. That quickstart creates the function app and function that you use here.
4. Select the Azure Queue Storage binding type, and add the settings as specified in the table that follows
this screenshot:
Storage account connection: AzureWebJobsStorage. You can use the storage account connection already being used by your function app, or create a new one.
Add an outputQueueItem parameter to the method signature as shown in the following example.
In the body of the function just before the return statement, add code that uses the parameter to create
a queue message.
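A sketch of both changes for the in-process C# model used in these quickstarts (only the binding-related lines are shown):

// Add to the Run method signature:
[Queue("outqueue"), StorageAccount("AzureWebJobsStorage")] ICollector<string> outputQueueItem,

// Add just before the return statement:
outputQueueItem.Add("Name passed to the function: " + name);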
Notice that the Request body contains the name value Azure. This value appears in the queue message
that is created when the function is invoked.
As an alternative to selecting Run here, you can call the function by entering a URL in a browser and
specifying the name value in the query string. The browser method is shown in the previous quickstart.
3. Check the logs to make sure that the function succeeded.
A new queue named outqueue is created in your storage account by the Functions runtime when the output binding is first used. You'll use your storage account to verify that the queue and a message in it were created.
Find the storage account connected to AzureWebJobsStorage
1. Go to your function app and select Configuration .
2. Under Application settings , select AzureWebJobsStorage .
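You can also peek at the queue from the command line. A sketch using the connection string value you just found:

az storage message peek --queue-name outqueue --connection-string "<AzureWebJobsStorage-value>"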
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
In this quickstart, you added an output binding to an existing function. For more information about binding to
Queue storage, see Azure Functions Storage queue bindings.
Azure Functions triggers and bindings concepts
Learn how Functions integrates with other services.
Azure Functions developer reference
Provides more technical information about the Functions runtime and a reference for coding functions and
defining triggers and bindings.
Code and test Azure Functions locally
Describes the options for developing your functions locally.
Connect Azure Functions to Azure Storage using
Visual Studio Code
Azure Functions lets you connect Azure services and other resources to functions without having to write your
own integration code. These bindings, which represent both input and output, are declared within the function
definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input
binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more,
see Azure Functions triggers and bindings concepts.
In this article, you learn how to use Visual Studio Code to connect Azure Storage to the function you created in
the previous quickstart article. The output binding that you add to this function writes data from the HTTP
request to a message in an Azure Queue storage queue.
Most bindings require a stored connection string that Functions uses to access the bound service. To make it
easier, you use the storage account that you created with your function app. The connection to this account is
already stored in an app setting named AzureWebJobsStorage .
IMPORTANT
Because the local.settings.json file contains secrets, it never gets published and is excluded from source control.
3. Copy the value AzureWebJobsStorage , which is the key for the storage account connection string value.
You use this connection to verify that the output binding works as expected.
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[1.*, 2.0.0)"
}
}
Now, you can add the storage output binding to your project.
Except for HTTP and timer triggers, bindings are implemented as extension packages. Run the following dotnet
add package command in the Terminal window to add the Storage extension package to your project.
In-process
Isolated process
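The package name depends on your process model. A sketch of both commands:

# In-process (class library) model:
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage

# Isolated worker model:
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage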
Now, you can add the storage output binding to your project.
Select binding with direction...: Azure Queue Storage. The binding is an Azure Storage queue binding.

The name used to identify this binding in your code: msg. This name identifies the binding parameter referenced in your code.

The queue to which the message will be sent: outqueue. The name of the queue that the binding writes to. When the queueName doesn't exist, the binding creates it on first use.
A binding is added to the bindings array in your function.json, which should look like the following:
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions
depend on whether your app runs in-process (C# class library) or in an isolated process.
In-process
Isolated process
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
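[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,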
The msg parameter is an ICollector<T> type, representing a collection of messages written to an output
binding when the function completes. In this case, the output is a storage queue named outqueue . The
StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting
that contains the storage account connection string and can be applied at the class, method, or parameter level.
In this case, you could omit StorageAccountAttribute because you're already using the default storage account.
The Run method definition must now look like the following code:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In a Java project, the bindings are defined as binding annotations on the function method. The function.json file
is then autogenerated based on these annotations.
Browse to the location of your function code under src/main/java, open the Function.java project file, and add
the following parameter to the run method definition:
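@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String> msg,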
The msg parameter is an OutputBinding<T> type, which represents a collection of strings that are written as
messages to an output binding when the function completes. In this case, the output is a storage queue named
outqueue . The connection string for the Storage account is set by the connection method. Rather than the
connection string itself, you pass the application setting that contains the Storage account connection string.
The run method definition should now look like the following example:
@FunctionName("HttpExample")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue",
connection = "AzureWebJobsStorage") OutputBinding<String> msg,
final ExecutionContext context) {
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this
code before the context.res statement.
context.bindings.msg = name;
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));
    if (name) {
        // Add a message to the storage queue,
        // which is the name passed to the function.
        context.bindings.msg = name;
        // Send a "hello" response.
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};

export default httpTrigger;
Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add
this code before you set the OK status in the if statement.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
if ($name) {
# Write the $name value to the queue,
# which is the name passed to the function.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
$status = [HttpStatusCode]::OK
$body = "Hello $name"
}
else {
$status = [HttpStatusCode]::BadRequest
$body = "Please pass a name on the query string or in the request body."
}
Update HttpExample\__init__.py to match the following code, adding the msg parameter to the function definition and msg.set(name) under the if name: statement:
import logging

import azure.functions as func


def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        msg.set(name)
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
The msg parameter is an instance of the azure.functions.Out class. The set method writes a string message to the queue. In this case, it's the name passed to the function in the URL query string.
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Now, you can use the new msg parameter to write to the output binding from your function code. Add the
following line of code before the success response to add the value of name to the msg output binding.
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a
queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your run method should now look like the following example:
@FunctionName("HttpExample")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue",
connection = "AzureWebJobsStorage") OutputBinding<String> msg,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
if (name == null) {
return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
.body("Please pass a name on the query string or in the request body").build();
} else {
// Write the name to the message queue.
msg.setValue(name);
@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);
final HttpResponseMessage ret = new Function().run(req, msg, context);
If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't
set to WSL Bash .
2. With the Core Tools running, go to the Azure: Functions area. Under Functions , expand Local Project
> Functions. Right-click (Windows) or Ctrl+click (macOS) the HttpExample function and choose
Execute Function Now....
3. In Enter request body, press Enter to send a request message to your function.
4. When the function executes locally and returns a response, a notification is raised in Visual Studio Code.
Information about the function execution is shown in the Terminal panel.
5. Press Ctrl + C to stop Core Tools and disconnect the debugger.
2. In the Connect dialog, choose Add an Azure account , choose your Azure environment , and then
select Sign in....
After you successfully sign in to your account, you see all of the Azure subscriptions associated with your
account.
Examine the output queue
1. In Visual Studio Code, press F1 to open the command palette, then search for and run the command
Azure Storage: Open in Storage Explorer and choose your storage account name. Your storage account
opens in the Azure Storage Explorer.
2. Expand the Queues node, and then select the queue named outqueue .
The queue contains the message that the queue output binding created when you ran the HTTP-triggered
function. If you invoked the function with the default name value of Azure, the queue message is Name
passed to the function: Azure.
3. Run the function again, send another request, and you see a new message in the queue.
Now, it's time to republish the updated function app to Azure.
Clean up resources
In Azure, resources refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You may be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and
select Azure: Open in portal .
2. Choose your function app and press Enter. The function app page opens in the Azure portal.
3. In the Overview tab, select the named link next to Resource group.
4. On the Resource group page, review the list of included resources, and verify that they're the ones you want to delete.
5. Select Delete resource group and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. Now you can learn more about
developing Functions using Visual Studio Code:
Develop Azure Functions using Visual Studio Code
Azure Functions triggers and bindings.
Examples of complete Function projects in C#.
Azure Functions C# developer reference
Examples of complete Function projects in JavaScript.
Azure Functions JavaScript developer guide
Examples of complete Function projects in Java.
Azure Functions Java developer guide
Examples of complete Function projects in TypeScript.
Azure Functions TypeScript developer guide
Examples of complete Function projects in Python.
Azure Functions Python developer guide
Examples of complete Function projects in PowerShell.
Azure Functions PowerShell developer guide
Connect functions to Azure Storage using Visual
Studio
Azure Functions lets you connect Azure services and other resources to functions without having to write your
own integration code. These bindings, which represent both input and output, are declared within the function
definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input
binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more,
see Azure Functions triggers and bindings concepts.
This article shows you how to use Visual Studio to connect the function you created in the previous quickstart
article to Azure Storage. The output binding that you add to this function writes data from the HTTP request to a
message in an Azure Queue storage queue.
Most bindings require a stored connection string that Functions uses to access the bound service. To make it
easier, you use the Storage account that you created with your function app. The connection to this account is
already stored in an app setting named AzureWebJobsStorage .
Prerequisites
Before you start this article, you must:
Complete part 1 of the Visual Studio quickstart.
Sign in to your Azure subscription from Visual Studio.
Install-Package Microsoft.Azure.WebJobs.Extensions.Storage
Now, you can add the storage output binding to your project.
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Run the function locally
1. To run your function, press F5 in Visual Studio. You might need to enable a firewall exception so that the
tools can handle HTTP requests. Authorization levels are never enforced when you run a function locally.
2. Copy the URL of your function from the Azure Functions runtime output.
3. Paste the URL for the HTTP request into your browser's address bar. Append the query string
?name=<YOUR_NAME> to this URL and run the request. The following image shows the response in the
browser to the local GET request returned by the function:
3. Expand the Queues node, and then double-click the queue named outqueue to view the contents of the
queue in Visual Studio.
The queue contains the message that the queue output binding created when you ran the HTTP-triggered
function. If you invoked the function with the default name value of Azure, the queue message is Name
passed to the function: Azure.
4. Run the function again, send another request, and you'll see a new message appear in the queue.
Now, it's time to republish the updated function app to Azure.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you've created in this quickstart, don't clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You've created resources to complete these quickstarts. You might be billed for these resources, depending on
your account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab, and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this article.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group and follow the instructions.
Deletion might take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. To learn more about developing
Functions, see Develop Azure Functions using Visual Studio.
Next, you should enable Application Insights monitoring for your function app:
Enable Application Insights integration
Connect Azure Functions to Azure Storage using
command line tools
In this article, you integrate an Azure Storage queue with the function and storage account you created in the
previous quickstart article. You achieve this integration by using an output binding that writes data from an
HTTP request to a message in the queue. Completing this article incurs no additional costs beyond the few USD
cents of the previous quickstart. To learn more about bindings, see Azure Functions triggers and bindings
concepts.
2. Open the local.settings.json file and locate the value named AzureWebJobsStorage, which is the storage account connection string. You use the name AzureWebJobsStorage and the connection string in other sections of this article.
IMPORTANT
Because the local.settings.json file contains secrets downloaded from Azure, always exclude this file from source control.
The .gitignore file created with a local functions project excludes the file by default.
Now, you can add the storage output binding to your project.
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
Each binding has at least a type, a direction, and a name. In the above example, the first binding is of type
httpTrigger with the direction in . For the in direction, name specifies the name of an input parameter that's
sent to the function when invoked by the trigger.
The second binding in the collection is named res . This http binding is an output binding ( out ) that is used
to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
}
The second binding in the collection is of type http with the direction out , in which case the special name of
$return indicates that this binding uses the function's return value rather than providing an input parameter.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
The second binding in the collection is named res . This http binding is an output binding ( out ) that is used
to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg ,
as shown in the code below:
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
}
In this case, msg is given to the function as an output argument. For a queue type, you must also specify the
name of the queue in queueName and provide the name of the Azure Storage connection (from local.settings.json
file) in connection .
In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions
depend on whether your app runs in-process (C# class library) or in an isolated process.
In-process
Isolated process
Open the HttpExample.cs project file and add the following parameter to the Run method definition:

[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
The msg parameter is an ICollector<T> type, representing a collection of messages written to an output
binding when the function completes. In this case, the output is a storage queue named outqueue . The
StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting
that contains the storage account connection string and can be applied at the class, method, or parameter level.
In this case, you could omit StorageAccountAttribute because you're already using the default storage account.
The Run method definition must now look like the following code:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In a Java project, the bindings are defined as binding annotations on the function method. The function.json file
is then autogenerated based on these annotations.
Browse to the location of your function code under src/main/java, open the Function.java project file, and add
the following parameter to the run method definition:
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String> msg
The msg parameter is an OutputBinding<T> type, which represents a collection of strings. These strings are
written as messages to an output binding when the function completes. In this case, the output is a storage
queue named outqueue . The connection string for the Storage account is set by the connection method. You
pass the application setting that contains the Storage account connection string, rather than passing the
connection string itself.
The run method definition must now look like the following example:
@FunctionName("HttpTrigger-Java")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.FUNCTION)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage")
OutputBinding<String> msg, final ExecutionContext context) {
...
}
For more information on the details of bindings, see Azure Functions triggers and bindings concepts and queue
output configuration.
import logging

import azure.functions as func


def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> func.HttpResponse:

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        msg.set(name)
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
The msg parameter is an instance of the azure.functions.Out class . The set method writes a string message
to the queue. In this case, it's the name passed to the function in the URL query string.
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this code before the context.res statement.
context.bindings.msg = name;
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
context.log('HTTP trigger function processed a request.');
const name = (req.query.name || (req.body && req.body.name));
if (name) {
// Add a message to the storage queue,
// which is the name passed to the function.
context.bindings.msg = name;
// Send a "hello" response.
context.res = {
// status: 200, /* Defaults to 200 */
body: "Hello " + (req.query.name || req.body.name)
};
}
else {
context.res = {
status: 400,
body: "Please pass a name on the query string or in the request body"
};
}
};
Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add
this code before you set the OK status in the if statement.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
if ($name) {
# Write the $name value to the queue,
# which is the name passed to the function.
$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg
$status = [HttpStatusCode]::OK
$body = "Hello $name"
}
else {
$status = [HttpStatusCode]::BadRequest
$body = "Please pass a name on the query string or in the request body."
}
In-process
Isolated process
Add code that uses the msg output binding object to create a queue message. Add this code before the method
returns.
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
if (!string.IsNullOrEmpty(name))
{
// Add a message to the output collection.
msg.Add(name);
}
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Now, you can use the new msg parameter to write to the output binding from your function code. Add the
following line of code before the success response to add the value of name to the msg output binding.
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a
queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your run method must now look like the following example:
if (name == null) {
    return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
        .body("Please pass a name on the query string or in the request body").build();
} else {
    // Write the name to the message queue.
    msg.setValue(name);

    return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}

Because the run method signature has changed, you must also update your unit tests to pass a mocked OutputBinding when calling the function:

@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);

final HttpResponseMessage ret = new Function().run(req, msg, context);
Observe that you don't need to write any code for authentication, getting a queue reference, or writing data. All
these integration tasks are conveniently handled in the Azure Functions runtime and queue output binding.
func start
Toward the end of the output, the following lines must appear:

...

Http Functions:

        HttpExample: [GET,POST] http://localhost:7071/api/HttpExample

...
NOTE
If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the
project. In that case, use Ctrl+C to stop the host, go to the project's root folder, and run the previous command
again.
2. Copy the URL of your HttpExample function from this output to a browser and append the query string
?name=<YOUR_NAME> , making the full URL like http://localhost:7071/api/HttpExample?name=Functions . The
browser should display a response message that echoes back your query string value. The terminal in
which you started your project also shows log output as you make requests.
3. When you're done, press Ctrl + C and type y to stop the functions host.
TIP
During startup, the host downloads and installs the Storage binding extension and other Microsoft binding extensions.
This installation happens because binding extensions are enabled by default in the host.json file with the following
properties:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[1.*, 2.0.0)"
}
}
If you encounter any errors related to binding extensions, check that the above properties are present in host.json.
bash
PowerShell
Azure CLI
export AZURE_STORAGE_CONNECTION_STRING="<MY_CONNECTION_STRING>"
2. (Optional) Use the az storage queue list command to view the Storage queues in your account. The
output from this command must include a queue named outqueue , which was created when the function
wrote its first message to that queue.
3. Use the az storage message get command to read the message from this queue, which should be the
value you supplied when testing the function earlier. The command reads and removes the first message
from the queue.
bash
PowerShell
Azure CLI
echo `echo $(az storage message get --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode`
Because the message body is stored base64 encoded, the message must be decoded before it's displayed. After you execute az storage message get, the message is removed from the queue. If there was only one message in outqueue, you won't retrieve a message when you run this command a second time; instead, you'll get an error.
In the local project folder, use the following Maven command to republish your project:
mvn azure-functions:deploy
Verify in Azure
1. As in the previous quickstart, use a browser or curl to test the redeployed function.
Browser
curl
Copy the complete Invoke URL shown in the output of the publish command into a browser address
bar, appending the query parameter &name=Functions . The browser should display the same output as
when you ran the function locally.
2. Examine the Storage queue again, as described in the previous section, to verify that it contains the new
message written to the queue.
Clean up resources
After you've finished, use the following command to delete the resource group and all its contained resources to
avoid incurring further costs.
Next steps
You've updated your HTTP triggered function to write data to a Storage queue. Now you can learn more about
developing Functions from the command line using Core Tools and Azure CLI:
Work with Azure Functions Core Tools
Azure Functions triggers and bindings
Examples of complete Function projects in C#.
Azure Functions C# developer reference
Examples of complete Function projects in JavaScript.
Azure Functions JavaScript developer guide
Examples of complete Function projects in TypeScript.
Azure Functions TypeScript developer guide
Examples of complete Function projects in Python.
Azure Functions Python developer guide
Examples of complete Function projects in PowerShell.
Azure Functions PowerShell developer guide
Quickstart: Create an app showing GitHub star
count with Azure Functions and SignalR Service via
C#
8/2/2022 • 6 minutes to read
In this article, you'll learn how to use SignalR Service and Azure Functions to build a serverless application with
C# to broadcast messages to clients.
NOTE
You can get the code mentioned in this article from GitHub.
Prerequisites
The following prerequisites are needed for this quickstart:
Visual Studio Code, or other code editor. If you don't already have Visual Studio Code installed, download
Visual Studio Code here.
An Azure subscription. If you don't have an Azure subscription, create one for free before you begin.
Azure Functions Core Tools
.NET Core SDK
Resource group: Create a resource group named SignalRTestResources. Select or create a resource group for your SignalR resource. It's useful to create a new resource group for this tutorial instead of using an existing resource group. To free resources after completing the tutorial, delete the resource group.
Region: Choose your region. Select the appropriate region for your new SignalR Service instance.
Pricing tier: Select Change, and then choose Free (Dev/Test Only). Choose Select to confirm your choice of pricing tier. Azure SignalR Service has three pricing tiers: Free, Standard, and Premium. Tutorials use the Free tier, unless noted otherwise in the prerequisites.
Service mode: Choose the appropriate service mode for this tutorial. Use Default for ASP.NET. Use Serverless for Azure Functions or REST API.
3. Using your code editor, create a new file with the name Function.cs. Add the following code to Function.cs:
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;
using Newtonsoft.Json;
namespace CSharp
{
public static class Function
{
private static HttpClient httpClient = new HttpClient();
private static string Etag = string.Empty;
private static string StarCount = "0";
[FunctionName("index")]
public static IActionResult GetHomePage([HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, ExecutionContext context)
{
var path = Path.Combine(context.FunctionAppDirectory, "content", "index.html");
return new ContentResult
{
Content = File.ReadAllText(path),
ContentType = "text/html",
};
}
[FunctionName("negotiate")]
public static SignalRConnectionInfo Negotiate(
[HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req,
[SignalRConnectionInfo(HubName = "serverless")] SignalRConnectionInfo connectionInfo)
{
return connectionInfo;
}
[FunctionName("broadcast")]
public static async Task Broadcast([TimerTrigger("*/5 * * * * *")] TimerInfo myTimer,
[SignalR(HubName = "serverless")] IAsyncCollector<SignalRMessage> signalRMessages)
{
var request = new HttpRequestMessage(HttpMethod.Get, "https://api.github.com/repos/azure/azure-signalr");
request.Headers.UserAgent.ParseAdd("Serverless");
request.Headers.Add("If-None-Match", Etag);
var response = await httpClient.SendAsync(request);
if (response.Headers.Contains("Etag"))
{
Etag = response.Headers.GetValues("Etag").First();
}
if (response.StatusCode == System.Net.HttpStatusCode.OK)
{
var result = JsonConvert.DeserializeObject<GitResult>(await response.Content.ReadAsStringAsync());
StarCount = result.StarCount;
}
await signalRMessages.AddAsync(
new SignalRMessage
{
Target = "newMessage",
Arguments = new[] { $"Current star count of https://github.com/Azure/azure-signalr is: {StarCount}" }
});
}

private class GitResult
{
[JsonProperty("stargazers_count")]
public string StarCount { get; set; }
}
}
}
<html>
<body>
<h1>Azure SignalR Serverless Sample</h1>
<div id="messages"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js">
</script>
<script>
let messages = document.querySelector('#messages');
const apiBaseUrl = window.location.origin;
const connection = new signalR.HubConnectionBuilder()
.withUrl(apiBaseUrl + '/api')
.configureLogging(signalR.LogLevel.Information)
.build();
connection.on('newMessage', (message) => {
document.getElementById("messages").innerHTML = message;
});
connection.start()
.catch(console.error);
</script>
</body>
</html>
5. Update your *.csproj file to copy the content page to the build output folder:
<ItemGroup>
<None Update="content/index.html">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
</ItemGroup>
6. You're almost done now. The last step is to add the SignalR Service connection string to the function app settings.
a. Confirm the SignalR Service instance was successfully created by searching for its name in the
search box at the top of the portal. Select the instance to open it.
b. Select Keys to view the connection strings for the SignalR Service instance.
c. Copy the primary connection string, and then run the following command:
func start
After the Azure function is running locally, open http://localhost:7071/api/index and you can see the current star count. If you star or unstar the repository in GitHub, the star count refreshes every few seconds.
NOTE
The SignalR binding needs Azure Storage, but you can use a local storage emulator when the function is running locally. If you get the following error, download and enable the Storage Emulator:
There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.
Clean up resources
If you're not going to continue to use this app, delete all resources created by this quickstart with the following
steps so you don't incur any charges:
1. In the Azure portal, select Resource groups on the far left, and then select the resource group you
created. Alternatively, you may use the search box to find the resource group by its name.
2. In the window that opens, select the resource group, and then click Delete resource group .
3. In the new window, type the name of the resource group to delete, and then click Delete .
Having issues? Try the troubleshooting guide or let us know.
Next steps
In this quickstart, you built and ran a real-time serverless application locally. Next, learn more about bi-
directional communication between clients and Azure Functions with Azure SignalR Service.
SignalR Service bindings for Azure Functions
Azure Functions Bi-directional communicating sample
Azure Functions Bi-directional communicating sample for isolated process
Deploy to Azure Function App using Visual Studio
Quickstart: Use Java to create an App showing
GitHub star count with Azure Functions and SignalR
Service
8/2/2022 • 7 minutes to read
In this article, you'll use Azure SignalR Service, Azure Functions, and Java to build a serverless application to
broadcast messages to clients.
NOTE
The code in this article is available on GitHub.
Prerequisites
A code editor, such as Visual Studio Code
An Azure account with an active subscription. If you don't already have an account, create an account for
free.
Azure Functions Core Tools. Used to run Azure Function apps locally.
The required SignalR Service bindings in Java are only supported in Azure Function Core Tools
version 2.4.419 (host version 2.0.12332) or above.
To install extensions, Azure Functions Core Tools requires the .NET Core SDK installed. However, no
knowledge of .NET is required to build Java Azure Function apps.
Java Developer Kit, version 11
Apache Maven, version 3.0 or above.
This quickstart can be run on macOS, Windows, or Linux.
Resource group: Create a resource group named SignalRTestResources. Select or create a resource group for your SignalR resource. It's useful to create a new resource group for this tutorial instead of using an existing resource group. To free resources after completing the tutorial, delete the resource group.
Region: Choose your region. Select the appropriate region for your new SignalR Service instance.
Pricing tier: Select Change, and then choose Free (Dev/Test Only). Choose Select to confirm your choice of pricing tier. Azure SignalR Service has three pricing tiers: Free, Standard, and Premium. Tutorials use the Free tier, unless noted otherwise in the prerequisites.
Service mode: Choose the appropriate service mode for this tutorial. Use Default for ASP.NET. Use Serverless for Azure Functions or REST API.
You don't need to change the settings on the Networking and Tags tabs for the SignalR tutorials.
6. Select the Review + create button at the bottom of the Basics tab.
7. On the Review + create tab, review the values and then select Create . It will take a few moments for
deployment to complete.
8. When the deployment is complete, select the Go to resource button.
9. On the SignalR resource screen, select Keys from the menu on the left, under Settings .
10. Copy the Connection string for the primary key. You'll need this connection string to configure your app
later in this tutorial.
Maven asks you for values needed to finish generating the project. Provide values for the prompts (such as groupId, artifactId, version, and package) or accept the defaults.
Open the generated Function.java file and replace its contents with the following code:
package com.signalr;
import com.google.gson.Gson;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;
import com.microsoft.azure.functions.annotation.TimerTrigger;
import com.microsoft.azure.functions.signalr.*;
import com.microsoft.azure.functions.signalr.annotation.*;
import org.apache.commons.io.IOUtils;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpResponse.BodyHandlers;
import java.nio.charset.StandardCharsets;
import java.util.Optional;
@FunctionName("index")
public HttpResponseMessage run(
@HttpTrigger(
name = "req",
methods = {HttpMethod.GET},
authLevel = AuthorizationLevel.ANONYMOUS)HttpRequestMessage<Optional<String>>
request,
final ExecutionContext context) throws IOException {
InputStream inputStream =
getClass().getClassLoader().getResourceAsStream("content/index.html");
String text = IOUtils.toString(inputStream, StandardCharsets.UTF_8.name());
return request.createResponseBuilder(HttpStatus.OK).header("Content-Type",
"text/html").body(text).build();
}
@FunctionName("negotiate")
public SignalRConnectionInfo negotiate(
@HttpTrigger(
name = "req",
methods = { HttpMethod.POST },
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> req,
@SignalRConnectionInfoInput(
name = "connectionInfo",
hubName = "serverless") SignalRConnectionInfo connectionInfo) {
return connectionInfo;
}
@FunctionName("broadcast")
@SignalROutput(name = "$return", hubName = "serverless")
public SignalRMessage broadcast(
@TimerTrigger(name = "timeTrigger", schedule = "*/5 * * * * *") String timerInfo) throws
IOException, InterruptedException {
HttpClient client = HttpClient.newHttpClient();
HttpRequest req = HttpRequest.newBuilder().uri(URI.create("https://api.github.com/repos/azure/azure-signalr")).header("User-Agent", "serverless").header("If-None-Match", Etag).build();
HttpResponse<String> res = client.send(req, BodyHandlers.ofString());
if (res.headers().firstValue("Etag").isPresent())
{
Etag = res.headers().firstValue("Etag").get();
}
if (res.statusCode() == 200)
{
Gson gson = new Gson();
GitResult result = gson.fromJson(res.body(), GitResult.class);
StarCount = result.stargazers_count;
}

return new SignalRMessage("newMessage", "Current star count of https://github.com/Azure/azure-signalr is: " + StarCount);
}

static class GitResult {
public String stargazers_count;
}
}
3. Some dependencies need to be added. Open pom.xml and add the following dependencies used in the
code:
<dependency>
<groupId>com.microsoft.azure.functions</groupId>
<artifactId>azure-functions-java-library-signalr</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.4</version>
</dependency>
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.7</version>
</dependency>
4. The client interface for this sample is a web page. The index function reads HTML content from content/index.html, so create a new file named index.html in a content directory under the resources directory. Your directory tree should look like this:
| - src
| | - main
| | | - java
| | | | - com
| | | | | - signalr
| | | | | | - Function.java
| | | - resources
| | | | - content
| | | | | - index.html
| - pom.xml
| - host.json
| - local.settings.json
<html>
<body>
<h1>Azure SignalR Serverless Sample</h1>
<div id="messages"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js">
</script>
<script>
let messages = document.querySelector('#messages');
const apiBaseUrl = window.location.origin;
const connection = new signalR.HubConnectionBuilder()
.withUrl(apiBaseUrl + '/api')
.configureLogging(signalR.LogLevel.Information)
.build();
connection.on('newMessage', (message) => {
document.getElementById("messages").innerHTML = message;
});
connection.start()
.catch(console.error);
</script>
</body>
</html>
6. You're almost done now. The last step is to add the SignalR Service connection string to the function app settings.
a. Search for the Azure SignalR instance you deployed earlier using the Search box in Azure portal.
Select the instance to open it.
b. Select Keys to view the connection strings for the SignalR Service instance.
c. Copy the primary connection string, and then run the following command:
After the function app is running locally, go to http://localhost:7071/api/index and you'll see the current star count. If you star or unstar the repository in GitHub, you'll see the star count refresh every few seconds.
NOTE
The SignalR binding needs Azure Storage, but you can use a local storage emulator when the function is running locally. If you get an error like the following, download and enable the Storage Emulator:
There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.
Clean up resources
If you're not going to continue to use this app, delete all resources created by this quickstart with the following
steps so you don't incur any charges:
1. In the Azure portal, select Resource groups on the far left, and then select the resource group you
created. Alternatively, you may use the search box to find the resource group by its name.
2. In the window that opens, select the resource group, and then click Delete resource group .
3. In the new window, type the name of the resource group to delete, and then click Delete .
Having issues? Try the troubleshooting guide or let us know.
Next steps
In this quickstart, you built and ran a real-time serverless application locally. Next, learn more about bi-directional communication between clients and Azure Functions with SignalR Service.
SignalR Service bindings for Azure Functions
Bi-directional communicating in Serverless
Create your first function with Java and Maven
Quickstart: Use JavaScript to create an App showing
GitHub star count with Azure Functions and SignalR
Service
8/2/2022 • 7 minutes to read
In this article, you'll use Azure SignalR Service, Azure Functions, and JavaScript to build a serverless application
to broadcast messages to clients.
NOTE
You can get all code mentioned in the article from GitHub.
Prerequisites
A code editor, such as Visual Studio Code.
An Azure account with an active subscription. If you don't already have an Azure account, create an account
for free.
Azure Functions Core Tools, version 2 or above. Used to run Azure Function apps locally.
Node.js, version 10.x
The examples should work with other versions of Node.js, for more information, see Azure Functions runtime
versions documentation.
This quickstart can be run on macOS, Windows, or Linux.
Resource group: Create a resource group named SignalRTestResources. Select or create a resource group for your SignalR resource. It's useful to create a new resource group for this tutorial instead of using an existing resource group. To free resources after completing the tutorial, delete the resource group.
Region: Choose your region. Select the appropriate region for your new SignalR Service instance.
Pricing tier: Select Change, and then choose Free (Dev/Test Only). Choose Select to confirm your choice of pricing tier. Azure SignalR Service has three pricing tiers: Free, Standard, and Premium. Tutorials use the Free tier, unless noted otherwise in the prerequisites.
Service mode: Choose the appropriate service mode for this tutorial. Use Default for ASP.NET. Use Serverless for Azure Functions or REST API.
2. After you initialize a project, you need to create functions. In this sample, we'll create three functions:
a. Run the following command to create an index function, which will host a web page for clients.
{
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
{
"disabled": false,
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"methods": [
"post"
],
"name": "req",
"route": "negotiate"
},
{
"type": "http",
"direction": "out",
"name": "res"
},
{
"type": "signalRConnectionInfo",
"name": "connectionInfo",
"hubName": "serverless",
"connectionStringSetting": "AzureSignalRConnectionString",
"direction": "in"
}
]
}
c. Create a broadcast function to broadcast messages to all clients. In this sample, we use a timer trigger to broadcast messages periodically.
context.bindings.signalRMessages = [{
"target": "newMessage",
"arguments": [ `Current star count of https://github.com/Azure/azure-signalr
is: ${star}` ]
}]
context.done();
});
}).on("error", (error) => {
context.log(error);
context.res = {
status: 500,
body: error
};
context.done();
});
req.end();
}
3. The client interface of this sample is a web page. The index function reads HTML content from content/index.html, so create a new file named index.html in the content directory under your project root folder, and then copy in the following code:
<html>
<body>
<h1>Azure SignalR Serverless Sample</h1>
<div id="messages"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js">
</script>
<script>
let messages = document.querySelector('#messages');
const apiBaseUrl = window.location.origin;
const connection = new signalR.HubConnectionBuilder()
.withUrl(apiBaseUrl + '/api')
.configureLogging(signalR.LogLevel.Information)
.build();
connection.on('newMessage', (message) => {
document.getElementById("messages").innerHTML = message;
});
connection.start()
.catch(console.error);
</script>
</body>
</html>
4. You're almost done now. The last step is to add the SignalR Service connection string to the function app settings.
a. In the Azure portal, find the SignalR instance you deployed earlier by typing its name in the
Search box. Select the instance to open it.
b. Select Keys to view the connection strings for the SignalR Service instance.
c. Copy the primary connection string, and then run the following command:
func start
After the function app is running locally, use your browser to visit http://localhost:7071/api/index and you can see the current star count. If you star or unstar the repository in GitHub, the star count refreshes every few seconds.
NOTE
The SignalR binding needs Azure Storage, but you can use a local storage emulator when the function is running locally. If you get an error like the following, download and enable the Storage Emulator:
There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.
Clean up resources
If you're not going to continue to use this app, delete all resources created by this quickstart with the following
steps so you don't incur any charges:
1. In the Azure portal, select Resource groups on the far left, and then select the resource group you
created. Alternatively, you may use the search box to find the resource group by its name.
2. In the window that opens, select the resource group, and then click Delete resource group .
3. In the new window, type the name of the resource group to delete, and then click Delete .
Next steps
In this quickstart, you built and ran a real-time serverless application locally. Next, learn more about bi-directional communication between clients and Azure Functions with SignalR Service.
SignalR Service bindings for Azure Functions
Bi-directional communicating in Serverless
Deploy Azure Functions with VS Code
Quickstart: Create a serverless app with Azure
Functions, SignalR Service, and Python
8/2/2022 • 7 minutes to read
Get started with Azure SignalR Service by using Azure Functions and Python to build a serverless application
that broadcasts messages to clients. You'll run the function in the local environment, connecting to an Azure
SignalR Service instance in the cloud. Completing this quickstart incurs a small cost of a few USD cents or less in
your Azure Account.
NOTE
You can get the code in this article from GitHub.
Prerequisites
This quickstart can be run on macOS, Windows, or Linux.
You'll need a code editor such as Visual Studio Code.
Install the Azure Functions Core Tools (version 2.7.1505 or higher) to run Python Azure Function apps
locally.
Azure Functions requires Python 3.6+. (See Supported Python versions.)
SignalR binding needs Azure Storage, but you can use a local storage emulator when a function is
running locally. You'll need to download and enable Storage Emulator.
If you don't have an Azure subscription, create an Azure free account before you begin.
Resource group: Create a resource group named SignalRTestResources. Select or create a resource group for your SignalR resource. It's useful to create a new resource group for this tutorial instead of using an existing resource group. To free resources after completing the tutorial, delete the resource group.
Region: Choose your region. Select the appropriate region for your new SignalR Service instance.
Pricing tier: Select Change, and then choose Free (Dev/Test Only). Choose Select to confirm your choice of pricing tier. Azure SignalR Service has three pricing tiers: Free, Standard, and Premium. Tutorials use the Free tier, unless noted otherwise in the prerequisites.
Service mode: Choose the appropriate service mode for this tutorial. Use Default for ASP.NET. Use Serverless for Azure Functions or REST API.
2. After you initialize a project, you need to create functions. In this sample, we need to create three
functions: index , negotiate , and broadcast .
a. Run the following command to create an index function, which will host a web page for a client.
{
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
import os

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Serve the client web page from content/index.html.
    f = open(os.path.dirname(os.path.realpath(__file__)) + '/../content/index.html')
    return func.HttpResponse(f.read(), mimetype='text/html')
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "signalRConnectionInfo",
"name": "connectionInfo",
"hubName": "serverless",
"connectionStringSetting": "AzureSignalRConnectionString",
"direction": "in"
}
]
}
c. Create a broadcast function to broadcast messages to all clients. In the sample, we use a timer trigger to broadcast messages periodically.
import requests
import json

import azure.functions as func

etag = ''
start_count = 0


def main(myTimer: func.TimerRequest, signalRMessages: func.Out[str]) -> None:
    global etag
    global start_count
    headers = {'User-Agent': 'serverless', 'If-None-Match': etag}
    res = requests.get('https://api.github.com/repos/azure/azure-signalr', headers=headers)
    if res.headers.get('ETag'):
        etag = res.headers.get('ETag')
    if res.status_code == 200:
        jres = res.json()
        start_count = jres['stargazers_count']
    signalRMessages.set(json.dumps({
        'target': 'newMessage',
        'arguments': [ 'Current star count of https://github.com/Azure/azure-signalr is: ' + str(start_count) ]
    }))
3. The client interface of this sample is a web page. The index function reads HTML content from content/index.html, so create a new file named index.html in the content directory under your project root folder, and then copy in the following content:
<html>
<body>
<h1>Azure SignalR Serverless Sample</h1>
<div id="messages"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js">
</script>
<script>
let messages = document.querySelector('#messages');
const apiBaseUrl = window.location.origin;
const connection = new signalR.HubConnectionBuilder()
.withUrl(apiBaseUrl + '/api')
.configureLogging(signalR.LogLevel.Information)
.build();
connection.on('newMessage', (message) => {
document.getElementById("messages").innerHTML = message;
});
connection.start()
.catch(console.error);
</script>
</body>
</html>
4. We're almost done now. The last step is to add the SignalR Service connection string to the function app settings.
a. In the Azure portal, search for the SignalR Service instance you deployed earlier. Select the
instance to open it.
b. Select Keys to view the connection strings for the SignalR Service instance.
c. Copy the primary connection string, and then run the following command:
func start
After the Azure Function is running locally, go to http://localhost:7071/api/index and you'll see the
current star count. If you star or unstar in GitHub, you'll get a refreshed star count every few seconds.
NOTE
SignalR binding needs Azure Storage, but you can use a local storage emulator when the function is running
locally. You need to download and enable Storage Emulator if you got an error like
There was an error performing a read operation on the Blob Storage Secret Repository. Please
ensure the 'AzureWebJobsStorage' connection string is valid.
Clean up resources
If you're not going to continue to use this app, delete all resources created by this quickstart with the following
steps so you don't incur any charges:
1. In the Azure portal, select Resource groups on the far left, and then select the resource group you
created. Alternatively, you may use the search box to find the resource group by its name.
2. In the window that opens, select the resource group, and then click Delete resource group .
3. In the new window, type the name of the resource group to delete, and then click Delete .
Having issues? Try the troubleshooting guide or let us know.
Next steps
In this quickstart, you built and ran a real-time serverless application locally. Next, learn more about bi-directional communication between clients and Azure Functions with SignalR Service.
SignalR Service bindings for Azure Functions
Bi-directional communicating in Serverless
Deploy Azure Functions with VS Code
How to work with Event Grid triggers and bindings
in Azure Functions
8/2/2022 • 5 minutes to read
Azure Functions provides built-in integration with Azure Event Grid by using triggers and bindings. This article
shows you how to configure and locally evaluate your Event Grid trigger and bindings. For more information
about Event Grid trigger and output binding definitions and examples, see one of the following reference
articles:
Azure Event Grid bindings for Azure Functions
Azure Event Grid trigger for Azure Functions
Azure Event Grid output binding for Azure Functions
Create a subscription
To start receiving Event Grid HTTP requests, create an Event Grid subscription that specifies the endpoint URL
that invokes the function.
Azure portal
For functions that you develop in the Azure portal with the Event Grid trigger, select Integration then choose
the Event Grid Trigger and select Create Event Grid subscription .
When you select this link, the portal opens the Create Event Subscription page with the current trigger
endpoint already defined.
For more information about how to create subscriptions by using the Azure portal, see Create custom event -
Azure portal in the Event Grid documentation.
Azure CLI
To create a subscription by using the Azure CLI, use the az eventgrid event-subscription create command.
The command requires the endpoint URL that invokes the function, and the endpoint varies between version 1.x
of the Functions runtime and later versions. The following example shows the version-specific URL pattern:
v2.x+
v1.x
https://{functionappname}.azurewebsites.net/runtime/webhooks/eventgrid?functionName={functionname}&code={systemkey}
The system key is an authorization key that has to be included in the endpoint URL for an Event Grid trigger. The
following section explains how to get the system key.
Here's an example that subscribes to a blob storage account (with a placeholder for the system key):
Bash
Cmd
For more information about how to create a subscription, see the blob storage quickstart or the other Event Grid
quickstarts.
Get the system key
You can get the system key by using the following API (HTTP GET):
v2.x+
v1.x
http://{functionappname}.azurewebsites.net/admin/host/systemkeys/eventgrid_extension?code={masterkey}
This REST API is an administrator API, so it requires your function app master key. Don't confuse the system key
(for invoking an Event Grid trigger function) with the master key (for performing administrative tasks on the
function app). When you subscribe to an event grid topic, be sure to use the system key.
Here's an example of the response that provides the system key:
{
"name": "eventgridextensionconfig_extension",
"value": "{the system key for the function}",
"links": [
{
"rel": "self",
"href": "{the URL for the function, without the system key}"
}
]
}
You can get the master key for your function app from the Function app settings tab in the portal.
IMPORTANT
The master key provides administrator access to your function app. Don't share this key with third parties or distribute it
in native client applications.
For more information, see Authorization keys in the HTTP trigger reference article.
The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app
to make sure it's running. In a web browser, navigate to: https://<your-site-name>.azurewebsites.net
You see the site but no events have been posted to it yet.
v2.x+
v1.x
http://localhost:7071/runtime/webhooks/eventgrid?functionName={FUNCTION_NAME}
The functionName parameter must be the name specified in the FunctionName attribute.
You can compose the same request in a tool such as Postman. When the request is received, the Event Grid trigger function executes and logs the event data.
Next steps
To learn more about Event Grid with Functions, see the following articles:
Azure Event Grid bindings for Azure Functions
Tutorial: Automate resizing uploaded images using Event Grid
Start/Stop VMs v2 overview
8/2/2022 • 6 minutes to read
The Start/Stop VMs v2 feature starts or stops Azure virtual machines (VMs) across multiple subscriptions. It
starts or stops Azure VMs on user-defined schedules, provides insights through Azure Application Insights, and sends optional notifications by using action groups. The feature can manage both Azure Resource Manager VMs
and classic VMs for most scenarios.
This new version of Start/Stop VMs v2 provides a decentralized low-cost automation option for customers who
want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure
Automation, but it is designed to take advantage of newer technology in Azure.
NOTE
We've added a plan (AZ - Availability Zone ) to our Start/Stop V2 solution to enable a high-availability offering. You can
now choose between Consumption and Availability Zone plans before you start your deployment. In most cases, the
monthly cost of the Availability Zone plan is higher when compared to the Consumption plan.
NOTE
Automatic updating functionality was introduced on April 28th, 2022. This new auto update feature helps you stay on the
latest version of the solution. This feature is enabled by default when you perform a new installation. If you deployed your
solution before this date, you can reinstall to the latest version from our GitHub repository.
Overview
Start/Stop VMs v2 is redesigned and it doesn't depend on Azure Automation or Azure Monitor Logs, as required
by the previous version. This version relies on Azure Functions to handle the VM start and stop execution.
A managed identity is created in Azure Active Directory (Azure AD) for this Azure Functions application and
allows Start/Stop VMs v2 to easily access other Azure AD-protected resources, such as the logic apps and Azure
VMs. For more about managed identities in Azure AD, see Managed identities for Azure resources.
An HTTP trigger endpoint function is created to support the schedule and sequence scenarios included with the
feature, as shown in the following table.
For example, Scheduled HTTP trigger function is used to handle schedule and sequence scenarios. Similarly,
AutoStop HTTP trigger function handles the auto stop scenario.
The queue-based trigger functions are required in support of this feature. All timer-based triggers are used to
perform the availability test and to monitor the health of the system.
Azure Logic Apps is used to configure and manage the start and stop schedules for the VMs, taking action by calling the function with a JSON payload. By default, the initial deployment creates a total of five logic apps for the following scenarios:
Scheduled - Start and stop actions are based on a schedule you specify against Azure Resource Manager and classic VMs. ststv2_vms_Scheduled_start and ststv2_vms_Scheduled_stop configure the scheduled start and stop.
Sequenced - Start and stop actions are based on a schedule targeting VMs with pre-defined sequencing tags. Only two named tags are supported: sequencestart and sequencestop. ststv2_vms_Sequenced_start and ststv2_vms_Sequenced_stop configure the sequenced start and stop.
The proper way to use the sequence functionality is to create a tag named sequencestart on each VM you wish to start in a sequence. The tag value needs to be an integer ranging from 1 to N for each VM in the respective scope. The tag is optional; if it's not present, the VM simply won't participate in the sequencing. The same approach applies to stopping VMs, except that the tag is named sequencestop. To get both start and stop actions, you have to configure both tags on each VM. If two or more VMs share the same tag value, those VMs are started or stopped at the same time. Both start and stop actions are processed in ascending order by the value of the tag.
NOTE
This scenario only supports Azure Resource Manager VMs.
AutoStop - This functionality is only used for performing a stop action against both Azure Resource Manager and classic VMs based on CPU utilization. It can also take action on a schedule: it creates alerts on VMs, and when the alert condition is met, the alert triggers the stop action. ststv2_vms_AutoStop configures the auto stop functionality.
Each Start/Stop action supports assignment of one or more subscriptions, resource groups, or a list of VMs.
An Azure Storage account, which is required by Functions, is also used by Start/Stop VMs v2 for two purposes:
Uses Azure Table Storage to store the execution operation metadata (that is, the start/stop VM action).
Uses Azure Queue Storage to support the Azure Functions queue-based triggers.
All telemetry data, that is, trace logs from the function app execution, is sent to your connected Application Insights instance. You can view the telemetry data stored in Application Insights from a set of pre-defined visualizations presented in a shared Azure dashboard.
Email notifications are also sent as a result of the actions performed on the VMs.
New releases
When a new version of Start/Stop VMs v2 is released, your instance is auto-updated without having to manually
redeploy.
Prerequisites
You must have an Azure account with an active subscription. Create an account for free.
Your account has been granted the Contributor permission in the subscription.
Start/Stop VMs v2 is available in all Azure global and US Government cloud regions that are listed in
Products available by region page for Azure Functions.
Next steps
To deploy this feature, see Deploy Start/Stop VMs.
Deploy Start/Stop VMs v2
8/2/2022 • 10 minutes to read
Perform the steps in this topic in sequence to install the Start/Stop VMs v2 feature. After completing the setup
process, configure the schedules to customize it to your requirements.
Permissions considerations
Please keep the following in mind before and during deployment:
The solution allows those with appropriate role-based access control (RBAC) permissions on the Start/Stop
v2 deployment to add, remove, and manage schedules for virtual machines under the scope of the Start/Stop
v2. This behavior is by design. In practice, this means a user who doesn't have direct RBAC permission on a
virtual machine could still create start, stop, and autostop operations on that virtual machine when they have
the RBAC permission to modify the Start/Stop v2 solution managing it.
Any users with access to the Start/Stop v2 solution could uncover cost, savings, operation history, and other
data that is stored in the Application Insights instance used by the Start/Stop v2 application.
When managing a Start/Stop v2 solution, you should consider the permissions of users to the Start/Stop v2 solution, particularly when they don't have permission to directly modify the target virtual machines.
Deploy feature
The deployment is initiated from the Start/Stop VMs v2 GitHub organization here. While this feature is intended
to manage all of your VMs in your subscription across all resource groups from a single deployment within the
subscription, you can install another instance of it based on the operations model or requirements of your
organization. It also can be configured to centrally manage VMs across multiple subscriptions.
To simplify management and removal, we recommend you deploy Start/Stop VMs v2 to a dedicated resource
group.
NOTE
Currently this solution does not support specifying an existing Storage account or Application Insights resource.
NOTE
The naming format for the function app and storage account has changed. To guarantee global uniqueness, a random and unique string is now appended to the names of these resources.
1. Open your browser and navigate to the Start/Stop VMs v2 GitHub organization.
2. Select the deployment option based on the Azure cloud environment your Azure VMs are created in.
3. If prompted, sign in to the Azure portal.
4. Choose the appropriate Plan from the drop-down box. When choosing a Zone Redundant plan (Start/StopV2-AZ), you must create your deployment in one of the following regions:
Australia East
Brazil South
Canada Central
Central US
East US
East US 2
France Central
Germany West Central
Japan East
North Europe
Southeast Asia
UK South
West Europe
West US 2
West US 3
5. Select Create , which opens the custom Azure Resource Manager deployment page in the Azure portal.
6. Enter the following values:
Resource Group Name: Specify the resource group name that will contain the individual resources for Start/Stop VMs.
Resource Group Region: Specify the region for the resource group. For example, Central US.
Azure Function App Name: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
Application Insights Name: Specify the name of your Application Insights instance that will hold the analytics for Start/Stop VMs.
Application Insights Region: Specify the region for the Application Insights instance.
Storage Account Name: Specify the name of the Azure Storage account to store Start/Stop VMs execution telemetry.
NOTE
We are collecting operation and heartbeat telemetry to better assist you if you reach the support team for any
troubleshooting. We are also collecting virtual machine event history to verify when the service acted on a virtual
machine and how long a virtual machine was snoozed in order to determine the efficacy of the service.
Role: Contributor
NOTE
This scenario only supports Azure Resource Manager VMs.
AutoStop - This functionality is only used for performing a stop action against both Azure Resource Manager and classic VMs based on CPU utilization. It can also take action on a schedule: it creates alerts on VMs, and when the alert condition is met, the alert triggers the stop action. ststv2_vms_AutoStop configures the auto-stop functionality.
If you need additional schedules, you can duplicate one of the Logic Apps provided using the Clone option in
the Azure portal.
{
"Action": "start",
"EnableClassic": false,
"RequestScopes": {
"ExcludedVMLists": [],
"Subscriptions": [
"/subscriptions/12345678-1234-5678-1234-123456781234/"
]
}
}
Specify multiple subscriptions in the subscriptions array with each value separated by a comma as in
the following example.
"Subscriptions": [
"/subscriptions/12345678-1234-5678-1234-123456781234/",
"/subscriptions/11111111-0000-1111-2222-444444444444/"
]
In the request body, if you want to manage VMs for specific resource groups, modify the request body as
shown in the following example. Each resource path specified must be separated by a comma. You can
specify one resource group or more if required.
This example also demonstrates excluding a virtual machine. You can exclude a VM by specifying its resource path or by wildcard.
{
"Action": "start",
"EnableClassic": false,
"RequestScopes": {
"ResourceGroups": [
"/subscriptions/12345678-1234-5678-1234-123456781234/resourceGroups/rg1/",
"/subscriptions/11111111-0000-1111-2222-444444444444/resourceGroups/rg2/"
],
"ExcludedVMLists": [
"/subscriptions/12345678-1111-2222-3333-
1234567891234/resourceGroups/vmrg1/providers/Microsoft.Compute/virtualMachines/vm1"
]
}
}
Here, the action is performed on all the VMs in both subscriptions except those whose names start with Az or Bz:
{
"Action": "start",
"EnableClassic": false,
"RequestScopes": {
"ExcludedVMLists": [“Az*”,“Bz*”],
"Subscriptions": [
"/subscriptions/12345678-1234-5678-1234-123456781234/",
"/subscriptions/11111111-0000-1111-2222-444444444444/"
]
}
}
In the request body, if you want to manage a specific set of VMs within the subscription, modify the
request body as shown in the following example. Each resource path specified must be separated by a
comma. You can specify one VM if required.
{
"Action": "start",
"EnableClassic": true,
"RequestScopes": {
"ExcludedVMLists": [],
"VMLists": [
"/subscriptions/12345678-1234-5678-1234-
123456781234/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1",
"/subscriptions/12345678-1234-5678-1234-
123456781234/resourceGroups/rg3/providers/Microsoft.Compute/virtualMachines/vm2",
"/subscriptions/11111111-0000-1111-2222-
444444444444/resourceGroups/rg2/providers/Microsoft.ClassicCompute/virtualMachines/vm30"
]
}
}
4. In the designer pane, select Function-Try to configure the target settings. In the request body, if you want to manage VMs across all resource groups in the subscription, modify the request body as shown in the following example.
{
"Action": "start",
"EnableClassic": false,
"RequestScopes": {
"ExcludedVMLists": [],
"Subscriptions": [
"/subscriptions/12345678-1234-5678-1234-123456781234/"
]
},
"Sequenced": true
}
Specify multiple subscriptions in the subscriptions array with each value separated by a comma as in
the following example.
"Subscriptions": [
"/subscriptions/12345678-1234-5678-1234-123456781234/",
"/subscriptions/11111111-0000-1111-2222-444444444444/"
]
In the request body, if you want to manage VMs for specific resource groups, modify the request body as
shown in the following example. Each resource path specified must be separated by a comma. You can
specify one resource group if required.
This example also demonstrates excluding a virtual machine by its resource path compared to the
example for scheduled start/stop, which used wildcards.
{
"Action": "start",
"EnableClassic": false,
"RequestScopes": {
"ResourceGroups": [
"/subscriptions/12345678-1234-5678-1234-123456781234/resourceGroups/rg1/",
"/subscriptions/11111111-0000-1111-2222-444444444444/resourceGroups/rg2/"
],
"ExcludedVMLists": [
"/subscriptions/12345678-1111-2222-3333-
1234567891234/resourceGroups/vmrg1/providers/Microsoft.Compute/virtualMachines/vm1"
]
},
"Sequenced": true
}
In the request body, if you want to manage a specific set of VMs within a subscription, modify the request
body as shown in the following example. Each resource path specified must be separated by a comma.
You can specify one VM if required.
{
"Action": "start",
"EnableClassic": true,
"RequestScopes": {
"ExcludedVMLists": [],
"VMLists": [
"/subscriptions/12345678-1234-5678-1234-
123456781234/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1",
"/subscriptions/12345678-1234-5678-1234-
123456781234/resourceGroups/rg2/providers/Microsoft.ClassicCompute/virtualMachines/vm2",
"/subscriptions/11111111-0000-1111-2222-
444444444444/resourceGroups/rg2/providers/Microsoft.ClassicCompute/virtualMachines/vm30"
]
},
"Sequenced": true
}
4. In the designer pane, select Function-Try to configure the target settings. In the request body, if you
want to manage VMs across all resource groups in the subscription, modify the request body as shown in
the following example.
{
"Action": "stop",
"EnableClassic": false,
"AutoStop_MetricName": "Percentage CPU",
"AutoStop_Condition": "LessThan",
"AutoStop_Description": "Alert to stop the VM if the CPU % exceed the threshold",
"AutoStop_Frequency": "00:05:00",
"AutoStop_Severity": "2",
"AutoStop_Threshold": "5",
"AutoStop_TimeAggregationOperator": "Average",
"AutoStop_TimeWindow": "06:00:00",
"RequestScopes":{
"Subscriptions":[
"/subscriptions/12345678-1111-2222-3333-1234567891234/",
"/subscriptions/12345678-2222-4444-5555-1234567891234/"
],
"ExcludedVMLists":[]
}
}
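The AutoStop_* values define an Azure Monitor metric alert: the VM is stopped when the time-aggregated metric satisfies the condition against the threshold. The following Python sketch only illustrates those semantics; it is not the service's implementation:

def should_auto_stop(avg_cpu_percent: float,
                     threshold: float = 5.0,
                     condition: str = "LessThan") -> bool:
    # Hypothetical helper mirroring AutoStop_Condition and AutoStop_Threshold:
    # with "LessThan" and a threshold of 5, sustained CPU below 5% triggers a stop.
    if condition == "LessThan":
        return avg_cpu_percent < threshold
    return avg_cpu_percent > threshold

print(should_auto_stop(3.2))   # True: 3.2% average CPU is below the 5% threshold
print(should_auto_stop(42.0))  # False: the VM keeps running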
In the request body, if you want to manage VMs for specific resource groups, modify the request body as
shown in the following example. Each resource path specified must be separated by a comma. You can
specify one resource group if required.
{
"Action": "stop",
"AutoStop_Condition": "LessThan",
"AutoStop_Description": "Alert to stop the VM if the CPU % exceed the threshold",
"AutoStop_Frequency": "00:05:00",
"AutoStop_MetricName": "Percentage CPU",
"AutoStop_Severity": "2",
"AutoStop_Threshold": "5",
"AutoStop_TimeAggregationOperator": "Average",
"AutoStop_TimeWindow": "06:00:00",
"EnableClassic": true,
"RequestScopes": {
"ExcludedVMLists": [],
"ResourceGroups": [
"/subscriptions/12345678-1111-2222-3333-1234567891234/resourceGroups/vmrg1/",
"/subscriptions/12345678-1111-2222-3333-1234567891234/resourceGroupsvmrg2/",
"/subscriptions/12345678-2222-4444-5555-1234567891234/resourceGroups/VMHostingRG/"
]
}
}
In the request body, if you want to manage a specific set of VMs within the subscription, modify the
request body as shown in the following example. Each resource path specified must be separated by a
comma. You can specify one VM if required.
{
"Action": "stop",
"AutoStop_Condition": "LessThan",
"AutoStop_Description": "Alert to stop the VM if the CPU % exceed the threshold",
"AutoStop_Frequency": "00:05:00",
"AutoStop_MetricName": "Percentage CPU",
"AutoStop_Severity": "2",
"AutoStop_Threshold": "5",
"AutoStop_TimeAggregationOperator": "Average",
"AutoStop_TimeWindow": "06:00:00",
"EnableClassic": true,
"RequestScopes": {
"ExcludedVMLists": [],
"VMLists": [
"/subscriptions/12345678-1111-2222-3333-
1234567891234/resourceGroups/rg3/providers/Microsoft.ClassicCompute/virtualMachines/Clasyvm11",
"/subscriptions/12345678-1111-2222-3333-
1234567891234/resourceGroups/vmrg1/providers/Microsoft.Compute/virtualMachines/vm1"
]
}
}
Next steps
To learn how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 feature and perform other
management tasks, see the Manage Start/Stop VMs article.
How to manage Start/Stop VMs v2
Azure dashboard
Start/Stop VMs v2 includes a dashboard to help you understand the management scope and recent operations
against your VMs. It's a quick and easy way to verify the status of each operation that's performed on your
Azure VMs. The visualization in each tile is based on a log query; to see the query, select the Open in logs
blade option in the right-hand corner of the tile. This opens the Log Analytics tool in the Azure portal, where
you can evaluate the query and modify it to support your needs, such as custom log alerts or a custom
workbook.
The log data each tile displays is refreshed every hour. You can also refresh on demand by clicking the
Refresh icon on a given visualization or by refreshing the full dashboard.
To learn about working with a log-based dashboard, see the following tutorial.
3. On the StartStopV2_VM_Notification page, you can modify the Email/SMS/Push/Voice notification
options. Add other actions or update the existing configuration for this action group, and then click OK to
save your changes.
To learn more about action groups, see action groups.
The following screenshot is an example email that is sent when the feature shuts down virtual machines.
Next steps
To handle problems during VM management, see Troubleshoot Start/Stop VMs v2 issues.
How to remove Start/Stop VMs v2
After you enable the Start/Stop VMs v2 feature to manage the running state of your Azure VMs, you may decide
to stop using it. You can remove the feature by deleting the resource group dedicated to storing the
following resources in support of Start/Stop VMs v2:
The Azure Functions applications
Schedules in Azure Logic Apps
The Application Insights instance
Azure Storage account
NOTE
If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2, or if you have a
related question, you can submit an issue on GitHub. Filing an Azure support incident from the Azure support site is not
available for this version.
NOTE
You might need to manually remove the managed identity associated with the removed Start/Stop VMs v2 function
app. To determine whether you need to do this, go to your subscription and select Access Control (IAM). From
there, you can filter by Type: App Services or Function Apps. If you find a managed identity left over from
your removed Start/Stop VMs v2 installation, remove it. Leaving this managed identity in place could interfere
with future installations.
Next steps
To re-deploy this feature, see Deploy Start/Stop v2.
Troubleshoot common issues with Start/Stop VMs
This article provides information on troubleshooting and resolving issues that may occur while attempting to
install and configure Start/Stop VMs. For general information, see Start/Stop VMs overview.
Next steps
Learn more about monitoring Azure Functions and logic apps:
Monitor Azure Functions.
How to configure monitoring for Azure Functions.
Monitor logic apps.
If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2, or if
you have a related question, you can submit an issue on GitHub. Filing an Azure support incident from
the Azure support site is also available for this version.
Use Azure Functions to connect to an Azure SQL
Database
This article shows you how to use Azure Functions to create a scheduled job that connects to an Azure SQL
Database or Azure SQL Managed Instance. The function code cleans up rows in a table in the database. The new
C# function is created based on a pre-defined timer trigger template in Visual Studio 2019. To support this
scenario, you must also set a database connection string as an app setting in the function app. For Azure SQL
Managed Instance, you need to enable a public endpoint to be able to connect from Azure Functions. This scenario
uses a bulk operation against the database.
If this is your first experience working with C# Functions, you should read the Azure Functions C# developer
reference.
Prerequisites
Complete the steps in the article Create your first function using Visual Studio to create a local function
app that targets version 2.x or a later version of the runtime. You must also have published your project
to a function app in Azure.
This article demonstrates a Transact-SQL command that executes a bulk cleanup operation in the
SalesOrderHeader table in the AdventureWorksLT sample database. To create the AdventureWorksLT
sample database, complete the steps in the article Create a database in Azure SQL Database using the
Azure portal.
You must add a server-level firewall rule for the public IP address of the computer you use for this
quickstart. This rule is required to be able to access the SQL Database instance from your local computer.
3. In Application Settings, select Add setting. In New app setting name, type sqldb_connection, and
select OK.
4. In the new sqldb_connection setting, paste the connection string you copied in the previous section
into the Local field, and replace the {your_username} and {your_password} placeholders with real values.
Select Insert value from local to copy the updated value into the Remote field, and then select OK.
The connection strings are stored encrypted in Azure (Remote ). To prevent leaking secrets, the
local.settings.json project file (Local ) should be excluded from source control, such as by using a
.gitignore file.
using System.Data.SqlClient;
using System.Threading.Tasks;
[FunctionName("DatabaseCleanup")]
public static async Task Run([TimerTrigger("*/15 * * * * *")]TimerInfo myTimer, ILogger log)
{
// Get the connection string from app settings and use it to create a connection.
var str = Environment.GetEnvironmentVariable("sqldb_connection");
using (SqlConnection conn = new SqlConnection(str))
{
conn.Open();
var text = "UPDATE SalesLT.SalesOrderHeader " +
"SET [Status] = 5 WHERE ShipDate < GetDate();";
This function runs every 15 seconds to update the Status column based on the ship date. To learn more
about the Timer trigger, see Timer trigger for Azure Functions.
6. Press F5 to start the function app. The Azure Functions Core Tools execution window opens behind Visual
Studio.
7. At 15 seconds after startup, the function runs. Watch the output and note the number of rows updated in
the SalesOrderHeader table.
On the first execution, 32 rows of data should be updated. Subsequent runs update no rows, unless
you change the SalesOrderHeader table data so that more rows are selected by the UPDATE
statement.
If you plan to publish this function, remember to change the TimerTrigger attribute to a more reasonable cron
schedule than every 15 seconds; for example, the NCRONTAB expression "0 0 1 * * *" runs once a day at 1:00 AM.
You also need to ensure that the function app has network access to the Azure SQL Database instance by
granting access to Azure IP addresses.
Next steps
Next, learn how to use Functions with Logic Apps to integrate with other services.
Create a function that integrates with Logic Apps
For more information about Functions, see the following articles:
Azure Functions developer reference
Programmer reference for coding functions and defining triggers and bindings.
Testing Azure Functions
Describes various tools and techniques for testing your functions.
Tutorial: Integrate Azure Functions with an Azure
virtual network by using private endpoints
This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using
private endpoints. You'll create a function by using a storage account that's locked behind a virtual network,
and the function is triggered from a Service Bus queue.
In this tutorial, you'll:
Create a function app in the Premium plan.
Create Azure resources, such as the Service Bus, storage account, and virtual network.
Lock down your storage account behind a private endpoint.
Lock down your Service Bus behind a private endpoint.
Deploy a function app that uses both the Service Bus and HTTP triggers.
Lock down your function app behind a private endpoint.
Test to see that your function app is secure inside the virtual network.
Clean up resources.
Function App name: a globally unique name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
4. Select Next: Hosting. On the Hosting page, enter the following settings.
5. Select Next: Monitoring. On the Monitoring page, enter the following settings.
4. On the IP Addresses tab, select Add subnet and configure the subnet settings.
4. On the Resource tab, use the private endpoint settings shown in the following table.
8. Create another private endpoint for tables. On the Resources tab, use the settings shown in the
following table. For all other settings, use the same values you used to create the private endpoint for
files.
9. After the private endpoints are created, return to the Firewalls and vir tual networks section of your
storage account.
10. Ensure Selected networks is selected. It's not necessary to add an existing virtual network.
Resources in the virtual network can now communicate with the storage account using the private endpoint.
4. On the Resource tab, use the private endpoint settings shown in the following table.
11. Select Add your client IP address to give your current client IP access to the namespace.
NOTE
Allowing your client IP address is necessary to enable the Azure portal to publish messages to the queue later in
this tutorial.
Create a queue
Create the queue where your Azure Functions Service Bus trigger will get events:
1. In your Service Bus, in the menu on the left, select Queues .
2. Select Queue . For the purposes of this tutorial, provide the name queue as the name of the new queue.
3. Select Create .
1. In GitHub, go to the following sample repository. It contains a function app with two functions: an HTTP
trigger and a Service Bus queue trigger.
https://github.com/Azure-Samples/functions-vnet-tutorial
2. At the top of the page, select Fork to create a fork of this repository in your own GitHub account or
organization.
3. In your function app, in the menu on the left, select Deployment Center . Then select Settings .
4. On the Settings tab, use the deployment settings shown in the following table.
5. Select Save .
6. Your initial deployment might take a few minutes. When your app is successfully deployed, on the Logs
tab, you see a Success (Active) status message. If necessary, refresh the page.
Congratulations! You've successfully deployed your sample function app.
7. On the Live metrics tab, you should see that your Service Bus queue trigger has fired. If it hasn't, resend
the message from Service Bus Explorer.
Congratulations! You've successfully tested your function app setup with private endpoints.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, you can delete them by deleting the resource group.
From the Azure portal menu or Home page, select Resource groups . Then, on the Resource groups page,
select myResourceGroup .
On the myResourceGroup page, make sure that the listed resources are the ones you want to delete.
Select Delete resource group , type myResourceGroup in the text box to confirm, and then select Delete .
Next steps
In this tutorial, you created a Premium function app, storage account, and Service Bus. You secured all of these
resources behind private endpoints.
Use the following links to learn more about Azure Functions networking options and private endpoints:
Networking options in Azure Functions
Azure Functions Premium plan
Service Bus private endpoints
Azure Storage private endpoints
Expose serverless APIs from HTTP endpoints using
Azure API Management
Azure Functions integrates with Azure API Management in the portal to let you expose your HTTP trigger
function endpoints as REST APIs. These APIs are described using an OpenAPI definition. This JSON (or YAML) file
contains information about what operations are available in an API. It includes details about how the request
and response data for the API should be structured. By integrating your function app, you can have API
Management generate these OpenAPI definitions.
This article shows you how to integrate your function app with API Management. This integration works for
function apps developed in any supported language. You can also import your function app from Azure API
Management.
For C# class library functions, you can also use Visual Studio to create and publish serverless APIs that
integrate with API Management.
3. Choose Export to create the API Management instance, which may take several minutes.
4. After Azure creates the instance, it enables the Enable Application Insights option on the page. Select
it to send logs to the same place as the function application.
Import functions
After the API Management instance is created, you can import your HTTP triggered function endpoints. This
example imports an endpoint named TurbineRepair.
1. In the API Management page, select Link API .
2. The Import Azure Functions page opens with the TurbineRepair function highlighted. Choose Select to
continue.
3. In the Create from Function App page, accept the defaults, and then select Create .
2. Save the downloaded JSON file, and then open it. Review the definition.
Next steps
You can now refine the definition in API Management in the portal. You can also learn more about API
Management.
Edit the OpenAPI definition in API Management
Create serverless APIs in Visual Studio using Azure
Functions and API Management integration
(preview)
REST APIs are often described using an OpenAPI definition. This file contains information about operations in an
API and how the request and response data for the API should be structured.
In this tutorial, you learn how to:
Create a serverless function project in Visual Studio
Test function APIs locally using built-in OpenAPI functionality
Publish project to a function app in Azure with API Management integration
Get the access key for the function and set it in API Management
Download the OpenAPI definition file
The serverless function you create provides an API that lets you determine whether an emergency repair on a
wind turbine is cost-effective. Because both the function app and API Management instance you create use
consumption plans, your cost for completing this tutorial is minimal.
NOTE
The OpenAPI and API Management integration featured in this article is currently in preview. This method for exposing a
serverless API is only supported for in-process C# class library functions. Isolated process C# class library functions and all
other language runtimes should instead use Azure API Management integration from the portal.
Prerequisites
Visual Studio 2022. Make sure you select the Azure development workload during installation.
An active Azure subscription. If you don't have one, create a free account before you begin.
Function template: HTTP trigger with OpenAPI. This value creates a function triggered by an HTTP request, with the ability to generate an OpenAPI definition file.
Use Azurite for runtime storage account (AzureWebJobsStorage): Selected. You can use the emulator for local development of HTTP trigger functions. Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure.
5. Select Create to create the function project and HTTP trigger function, with support for OpenAPI.
Visual Studio creates a project and class named Function1 that contains boilerplate code for the HTTP trigger
function type. Next, you replace this function template code with your own customized code.
The function then calculates how much a repair costs, and how much revenue the turbine could make in a 24-
hour period. Parameters are supplied either in the query string or in the payload of a POST request.
In the Function1.cs project file, replace the contents of the generated class library code with the following code:
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Attributes;
using Microsoft.Azure.WebJobs.Extensions.OpenApi.Core.Enums;
using Microsoft.Extensions.Logging;
using Microsoft.OpenApi.Models;
using Newtonsoft.Json;
namespace TurbineRepair
{
public static class Turbines
{
const double revenuePerkW = 0.12;
const double technicianCost = 250;
const double turbineCost = 100;
[FunctionName("TurbineRepair")]
[OpenApiOperation(operationId: "Run")]
[OpenApiSecurity("function_key", SecuritySchemeType.ApiKey, Name = "code", In =
OpenApiSecurityLocationType.Query)]
[OpenApiRequestBody("application/json", typeof(RequestBodyModel),
Description = "JSON request body containing { hours, capacity}")]
[OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "application/json", bodyType:
typeof(string),
Description = "The OK response message containing a JSON result.")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
ILogger log)
{
// Get request body data.
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
int? capacity = data?.capacity;
int? hours = data?.hours;

// Return a 400 response if either input value is missing.
if (capacity == null || hours == null)
{
return new BadRequestObjectResult("Please pass capacity and hours in the request body");
}

// Compare the 24-hour revenue opportunity to the cost of the repair.
double? revenueOpportunity = capacity * revenuePerkW * 24;
double? costToFix = (hours * technicianCost) + turbineCost;
string repairTurbine = revenueOpportunity > costToFix ? "Yes" : "No";

return new OkObjectResult(new
{
message = repairTurbine,
revenueOpportunity = "$" + revenueOpportunity,
costToFix = "$" + costToFix
});
}
}

// Model referenced by the OpenApiRequestBody attribute above.
public class RequestBodyModel
{
public int Hours { get; set; }
public int Capacity { get; set; }
}
}
This function code returns a message of Yes or No to indicate whether an emergency repair is cost-effective. It
also returns the revenue opportunity that the turbine represents and the cost to fix the turbine.
3. Select POST > Try it out, enter values for hours and capacity either as query parameters or in the
JSON request body, and select Execute.
4. When you enter integer values like 6 for hours and 2500 for capacity , you get a JSON response that
looks like the following example:
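Based on the constants in the function code ($0.12 revenue per kW over 24 hours, plus a repair cost of $250 per technician hour and $100 for the turbine), those inputs produce a response along these lines (illustrative reconstruction):

{
  "message": "Yes",
  "revenueOpportunity": "$7200",
  "costToFix": "$1600"
}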
Now you have a function that determines the cost-effectiveness of emergency repairs. Next, you publish your
project and API definitions to Azure.
4. Create a new instance using the values specified in the following table:
Resource group: the name of the resource group in which to create your function app. Select an existing resource group from the drop-down list, or choose New to create a new resource group.
5. Select Create to create a function app and its related resources in Azure. Status of resource creation is
shown in the lower left of the window.
6. Back in Functions instance , make sure that Run from package file is checked. Your function app is
deployed using Zip Deploy with Run-From-Package mode enabled. This deployment method is
recommended for your functions project, since it results in better performance.
7. Select Next, and on the API Management page, choose + Create an API Management API.
8. Create an API in API Management by using the values in the following table:
Resource group: the name of your resource group. Select the same resource group as your function app from the drop-down list.
API Management service: New instance. Select New to create a new API Management instance in the serverless tier.
9. Select Create to create the API Management instance with the TurbineRepair API from the function
integration.
10. Select Finish, verify that the Publish page says Ready to publish, and then select Publish to deploy the
package containing your project files to your new function app in Azure.
After the deployment completes, the root URL of the function app in Azure is shown in the Publish tab.
Get the function access key
1. In the Publish tab, select the ellipses (...) next to Hosting and select Open in Azure portal. The
function app you created opens in the Azure portal in your default browser.
2. Under Functions, select Functions > TurbineRepair, then select Function keys.
3. Under Function keys, select default and copy the value. You can now set this key in API Management
so that it can access the function endpoint.
3. Below Inbound processing, in Set query parameters, type code for Name, select +Value, paste in
the copied function key, and select Save. API Management includes the function key when it passes calls
through to the function endpoint.
Now that the function key is set, you can call the API to verify that it works when hosted in Azure.
{
"hours": "6",
"capacity": "2500"
}
As before, you can also provide the same values as query parameters.
2. Select Send , and then view the HTTP response to verify the same results are returned from the API.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, you can delete them by deleting the resource group.
From the Azure portal menu or Home page, select Resource groups . Then, on the Resource groups page,
select the group you created.
On the resource group page, make sure that the listed resources are the ones you want to delete.
Select Delete resource group , type the name of your group in the text box to confirm, and then select Delete .
Next steps
You've used Visual Studio 2022 to create a function that is self-documenting because of the OpenAPI Extension
and integrated with API Management. You can now refine the definition in API Management in the portal. You
can also learn more about API Management.
Edit the OpenAPI definition in API Management
How to use managed identities for App Service and
Azure Functions
This article shows you how to create a managed identity for App Service and Azure Functions applications and
how to use it to access other resources.
IMPORTANT
Managed identities for App Service and Azure Functions won't behave as expected if your app is migrated across
subscriptions/tenants. The app needs to obtain a new identity, which is done by disabling and re-enabling the feature.
Downstream resources also need to have access policies updated to use the new identity.
NOTE
Managed identities are not available for apps deployed in Azure Arc.
A managed identity from Azure Active Directory (Azure AD) allows your app to easily access other Azure AD-
protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not
require you to provision or rotate any secrets. For more about managed identities in Azure AD, see Managed
identities for Azure resources.
Your application can be granted two types of identities:
A system-assigned identity is tied to your application and is deleted if your app is deleted. An app can
only have one system-assigned identity.
A user-assigned identity is a standalone Azure resource that can be assigned to your app. An app can have
multiple user-assigned identities.
1. In the left navigation of your app's page, scroll down to the Settings group.
2. Select Identity .
3. Within the System assigned tab, switch Status to On . Click Save .
NOTE
To find the managed identity for your web app or slot app in the Azure portal, under Enterprise applications , look in
the User settings section. Usually, the slot name is similar to <app name>/slots/<slot name> .
IMPORTANT
The back-end services for managed identities maintain a cache per resource URI for around 24 hours. If you update the
access policy of a particular target resource and immediately retrieve a token for that resource, you may continue to get a
cached token with outdated permissions until that token expires. There's currently no way to force a token refresh.
For example, an HTTP GET request to the local token endpoint returns a response payload like the following:
{
"access_token": "eyJ0eXAi…",
"expires_on": "1586984735",
"resource": "https://vault.azure.net",
"token_type": "Bearer",
"client_id": "5E29463D-71DA-4FE0-8E69-999B57DB23B0"
}
This response is the same as the response for the Azure AD service-to-service access token request. To access
Key Vault, you will then add the value of access_token to a client connection with the vault.
For more information on the REST endpoint, see REST endpoint reference.
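For illustration, here's a minimal Python sketch of that REST call from inside a function app. It assumes the requests package is available; IDENTITY_ENDPOINT and IDENTITY_HEADER are the environment variables the platform injects for the local token service:

import os
import requests  # assumes the 'requests' package is in your dependencies

def get_managed_identity_token(resource: str) -> str:
    # The platform injects these variables for the local token service.
    endpoint = os.environ["IDENTITY_ENDPOINT"]
    secret = os.environ["IDENTITY_HEADER"]
    response = requests.get(
        endpoint,
        params={"resource": resource, "api-version": "2019-08-01"},
        headers={"X-IDENTITY-HEADER": secret},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# Example: request a token for Azure Key Vault.
token = get_managed_identity_token("https://vault.azure.net")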
Remove an identity
When you remove a system-assigned identity, it's deleted from Azure Active Directory. System-assigned
identities are also automatically removed from Azure Active Directory when you delete the app resource itself.
Using the Azure portal:
1. In the left navigation of your app's page, scroll down to the Settings group.
2. Select Identity . Then follow the steps based on the identity type:
System-assigned identity : Within the System assigned tab, switch Status to Off . Click Save .
User-assigned identity : Click the User assigned tab, select the checkbox for the identity, and click
Remove . Click Yes to confirm.
NOTE
There is also an application setting that can be set, WEBSITE_DISABLE_MSI, which just disables the local token service.
However, it leaves the identity in place, and tooling will still show the managed identity as "on" or "enabled." As a result,
use of this setting is not recommended.
IMPORTANT
If you are attempting to obtain tokens for user-assigned identities, you must include one of the optional properties.
Otherwise the token service will attempt to obtain a token for a system-assigned identity, which may or may not exist.
Next steps
Tutorial: Connect to SQL Database from App Service without secrets using a managed identity
Access Azure Storage securely using a managed identity
Call Microsoft Graph securely using a managed identity
Connect securely to services with Key Vault secrets
Customize an HTTP endpoint in Azure Functions
In this article, you learn how Azure Functions allows you to build highly scalable APIs. Azure Functions comes
with a collection of built-in HTTP triggers and bindings, which make it easy to author an endpoint in a variety of
languages, including Node.js, C#, and more. In this article, you'll customize an HTTP trigger to handle specific
actions in your API design. You'll also prepare for growing your API by integrating it with Azure Functions
Proxies and setting up mock APIs. These tasks are accomplished on top of the Functions serverless compute
environment, so you don't have to worry about scaling resources - you can just focus on your API logic.
Prerequisites
This topic uses as its starting point the resources created in Create your first function from the Azure portal. If
you haven't already done so, please complete these steps now to create your function app.
The resulting function will be used for the rest of this article.
Sign in to Azure
Sign in to the Azure portal with your Azure account.
You didn't include the /api base path prefix in the route template, because it's handled by a global
setting.
3. Select Save .
For more information about customizing HTTP functions, see Azure Functions HTTP bindings.
Test your API
Next, test your function to see how it works with the new API surface:
1. On the function page, select Code + Test from the left menu.
2. Select Get function URL from the top menu and copy the URL. Confirm that it now uses the
/api/hello path.
3. Copy the URL into a new browser tab or your preferred REST client.
Browsers use GET by default.
4. Add parameters to the query string in your URL.
For example, /api/hello/?name=John .
5. Press Enter to confirm that it's working. You should see the response, "Hello John."
6. You can also try calling the endpoint with another HTTP method to confirm that the function isn't
executed. To do so, use a REST client, such as cURL, Postman, or Fiddler.
Proxies overview
In the next section, you'll surface your API through a proxy. Azure Functions Proxies allows you to forward
requests to other resources. You define an HTTP endpoint just as you would with an HTTP trigger. However,
instead of writing code to execute when that endpoint is called, you provide a URL to a remote implementation.
Doing so allows you to compose multiple API sources into a single API surface that's easy for clients to
consume, which is useful if you wish to build your API as microservices.
A proxy can point to any HTTP resource, such as:
Azure Functions
API apps in Azure App Service
Docker containers in App Service on Linux
Any other hosted API
To learn more about proxies, see Working with Azure Functions Proxies.
NOTE
Proxies is available in Azure Functions versions 1.x to 3.x.
NOTE
App settings are recommended for the host configuration to prevent a hard-coded environment dependency for
the proxy. Using app settings means that you can move the proxy configuration between environments, and the
environment-specific app settings will be applied.
4. Select Save .
Creating a proxy on the frontend
1. Navigate back to your front-end function app in the portal.
2. In the left-hand menu, select Proxies , and then select Add .
3. On the New Proxy page, use the settings in the following table, and then select Create .
Next, you'll add your mock API. Replace your proxies.json file with the following code:
{
"$schema": "http://json.schemastore.org/proxies",
"proxies": {
"HelloProxy": {
"matchCondition": {
"route": "/api/remotehello"
},
"backendUri": "https://%HELLO_HOST%/api/hello"
},
"GetUserByName" : {
"matchCondition": {
"methods": [ "GET" ],
"route": "/api/users/{username}"
},
"responseOverrides": {
"response.statusCode": "200",
"response.headers.Content-Type" : "application/json",
"response.body": {
"name": "{username}",
"description": "Awesome developer and master of serverless APIs",
"skills": [
"Serverless",
"APIs",
"Azure",
"Cloud"
]
}
}
}
}
}
This code adds a new proxy, GetUserByName , without the backendUri property. Instead of calling another
resource, it modifies the default response from Proxies using a response override. Request and response
overrides can also be used in conjunction with a backend URL. This technique is particularly useful when
proxying to a legacy system, where you might need to modify headers, query parameters, and so on. To learn
more about request and response overrides, see Modifying requests and responses in Proxies.
Test your mock API by calling the <YourProxyApp>.azurewebsites.net/api/users/{username} endpoint using a
browser or your favorite REST client. Be sure to replace {username} with a string value representing a username.
Next steps
In this article, you learned how to build and customize an API on Azure Functions. You also learned how to bring
multiple APIs, including mocks, together as a unified API surface. You can use these techniques to build out APIs
of any complexity, all while running on the serverless compute model provided by Azure Functions.
The following references may be helpful as you develop your API further:
Azure Functions HTTP bindings
Working with Azure Functions Proxies
Documenting an Azure Functions API (preview)
Managing hybrid environments with PowerShell in
Azure Functions and App Service Hybrid
Connections
The Azure App Service Hybrid Connections feature enables access to resources in other networks. You can learn
more about this capability in the Hybrid Connections documentation. This article describes how to use this
capability to run PowerShell functions that target an on-premises server. This server can then be used to
manage all resources in the on-premises environment from an Azure PowerShell function.
# Create firewall rule for WinRM. The default HTTPS port is 5986.
New-NetFirewallRule -Name "WinRM HTTPS" `
-DisplayName "WinRM HTTPS" `
-Enabled True `
-Profile "Any" `
-Action "Allow" `
-Direction "Inbound" `
-LocalPort 5986 `
-Protocol "TCP"
Function App name: a globally unique name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9, and -.
4. Select Next: Hosting. On the Hosting page, enter the following settings.
Plan type: App service plan. When you run in an App Service plan, you must manage the scaling of your function app.
5. Select Next: Monitoring. On the Monitoring page, enter the following settings.
4. Enter the information for the hybrid connection. You can make the Endpoint Host setting match the
host name of the on-premises server, so the server is easier to remember later when you're running
remote commands. The port matches the default Windows remote management service port that was
defined on the server earlier.
Name: contosopowershellhybrid
2. Copy the .msi file from your local computer to the on-premises server.
3. Run the Hybrid Connection Manager installer to install the service on the on-premises server.
4. From the portal, open the hybrid connection and then copy the gateway connection string to the
clipboard.
Restart-Service HybridConnectionManager
$Script = {
Param(
[Parameter(Mandatory=$True)]
[String] $Service
)
Get-Service $Service
}
2. Select Save .
3. Select Test , and then select Run to test the function. Review the logs to verify that the test was successful.
Managing other systems on-premises
You can use the connected on-premises server to connect to other servers and management systems in the
local environment. This lets you manage your datacenter operations from Azure by using your PowerShell
functions. The following script registers a PowerShell configuration session that runs under the provided
credentials. These credentials must be for an administrator on the remote servers. You can then use this
configuration to access other endpoints on the local server or datacenter.
# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)
# The remote server to connect to for running remote PowerShell commands.
$RemoteServer = "finance2"
Write-Output "Use hybrid connection server as a jump box to connect to a remote machine"
# Register an endpoint that runs under credentials ($Credential) that have access to the remote server.
$SessionName = "HybridSession"
$ScriptCommand = {
param (
[Parameter(Mandatory=$True)]
$SessionName)
# Script to run on the jump box to run against the second machine.
$RemoteScriptCommand = {
param (
[Parameter(Mandatory=$True)]
$ComputerName)
# Write out the hostname of the hybrid connection server.
hostname
# Write out the hostname of the remote server.
Invoke-Command -ComputerName $ComputerName -Credential $Using:Credential -ScriptBlock {hostname} `
-UseSSL -Port 5986 -SessionOption (New-PSSessionOption -SkipCACheck)
}
Write-Output "Running command against remote machine via jumpbox by connecting to the PowerShell
configuration session"
Invoke-Command -ComputerName $HybridEndpoint `
-Credential $Credential `
-Port 5986 `
-UseSSL `
-ScriptBlock $RemoteScriptCommand `
-ArgumentList $RemoteServer `
-SessionOption (New-PSSessionOption -SkipCACheck) `
-ConfigurationName $SessionName
Replace the following variables in this script with the applicable values from your environment:
$HybridEndpoint
$RemoteServer
In the two preceding scenarios, you can connect and manage your on-premises environments by using
PowerShell in Azure Functions and Hybrid Connections. We encourage you to learn more about Hybrid
Connections and PowerShell in functions.
You can also use Azure virtual networks to connect to your on-premises environment through Azure Functions.
Next steps
Learn more about working with PowerShell functions
Troubleshoot error: "Azure Functions Runtime is
unreachable"
This article helps you troubleshoot the following error string that appears in the Azure portal:
"Error: Azure Functions Runtime is unreachable. Click here for details on storage configuration."
This issue occurs when the Functions runtime can't start. The most common reason for this is that the function
app has lost access to its storage account. For more information, see Storage account requirements.
The rest of this article helps you troubleshoot specific causes of this error, including how to identify and resolve
each case.
For more information, see App settings reference for Azure Functions.
Guidance
Don't check slot setting for any of these settings. If you swap deployment slots, the function app breaks.
Don't modify these settings as part of automated deployments.
These settings must be provided and valid at creation time. An automated deployment that doesn't contain
these settings results in a function app that won't run, even if the settings are added later.
"The Function App has reached daily usage quota and has been stopped until the next 24 hours time frame."
To resolve this issue, remove or increase the daily quota, and then restart your app. Otherwise, the execution of
your app is blocked until the next day.
2. Download the Docker logs ZIP file and review them locally, or review the docker logs from within Kudu.
3. Check for any errors in the logs that would indicate that the container is unable to start successfully.
Any such error would need to be remedied for the function to work correctly.
When the container image can't be found, you should see a manifest unknown error in the Docker logs. In this
case, you can use the Azure CLI commands documented at How to target Azure Functions runtime versions to
change the container image being referenced. If you've deployed a custom container image, you need to fix the
image and redeploy the updated version to the referenced registry.
Next steps
Learn about monitoring your function apps:
Monitor Azure Functions
Troubleshoot Python errors in Azure Functions
Troubleshoot ModuleNotFoundError
This section helps you troubleshoot module-related errors in your Python function app. These errors typically
result in the following Azure Functions error message:
Exception: ModuleNotFoundError: No module named 'module_name'
This error occurs when a Python function app fails to load a Python module. The root cause for this error is one
of the following issues:
The package can't be found
The package isn't resolved with proper Linux wheel
The package is incompatible with the Python interpreter version
The package conflicts with other packages
The package only supports Windows or macOS platforms
View project files
To identify the actual cause of your issue, you need to get the Python project files that run on your function app.
If you don't have the project files on your local computer, you can get them in one of the following ways:
If the function app has a WEBSITE_RUN_FROM_PACKAGE app setting whose value is a URL, download the file by
copying and pasting the URL into your browser.
If the function app has WEBSITE_RUN_FROM_PACKAGE set to 1 , navigate to
https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages and download the file from the latest
href URL.
If the function app has neither app setting, navigate to
https://<app-name>.scm.azurewebsites.net/api/settings and find the URL under SCM_RUN_FROM_PACKAGE .
Download the file by copying and pasting the URL into your browser.
If none of these works for you, navigate to https://<app-name>.scm.azurewebsites.net/DebugConsole and
reveal the content under /home/site/wwwroot .
The rest of this article helps you troubleshoot potential causes of this error by inspecting your function app's
content, identifying the root cause, and resolving the specific issue.
Diagnose ModuleNotFoundError
This section details potential root causes of module-related errors. After you figure out which is the likely root
cause, you can go to the related mitigation.
The package can't be found
Browse to .python_packages/lib/python3.6/site-packages/<package-name> or
.python_packages/lib/site-packages/<package-name> . If the file path doesn't exist, this missing path is likely the
root cause.
Using third-party or outdated tools during deployment may cause this issue.
See Enable remote build or Build native dependencies for mitigation.
The package isn't resolved with proper Linux wheel
Go to .python_packages/lib/python3.6/site-packages/<package-name>-<version>-dist-info or
.python_packages/lib/site-packages/<package-name>-<version>-dist-info . Use your favorite text editor to open
the wheel file and check the Tag: section. If the value of the tag doesn't contain linux , this could be the issue.
Python functions run only on Linux in Azure: Functions runtime v2.x runs on Debian Stretch and the v3.x runtime
on Debian Buster. The artifact is expected to contain the correct Linux binaries. Using the --build local flag
in Core Tools, or third-party or outdated tools, may cause older binaries to be used.
See Enable remote build or Build native dependencies for mitigation.
The package is incompatible with the Python interpreter version
Go to .python_packages/lib/python3.6/site-packages/<package-name>-<version>-dist-info or
.python_packages/lib/site-packages/<package-name>-<version>-dist-info . Using a text editor, open the METADATA
file and check the Classifiers: section. If the section doesn't contain Python :: 3 , Python :: 3.6 ,
Python :: 3.7 , Python :: 3.8 , or Python :: 3.9 , the package version is either too old or, more
likely, the package is already out of maintenance.
You can check the Python version of your function app from the Azure portal. Navigate to your function app,
choose Resource explorer , and select Go .
After the explorer loads, search for LinuxFxVersion , which shows the Python version.
See Update your package to the latest version or Replace the package with equivalents for mitigation.
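As a quicker alternative to opening the METADATA file by hand, you can print the classifiers of a locally installed package with the standard library; a small sketch (the package name is illustrative):

from importlib import metadata  # standard library in Python 3.8+

# Print which Python versions a locally installed package declares support for.
dist = metadata.distribution("azure-functions")  # illustrative package name
for classifier in dist.metadata.get_all("Classifier") or []:
    if classifier.startswith("Programming Language :: Python"):
        print(classifier)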
The package conflicts with other packages
If you have verified that the package is resolved correctly with the proper Linux wheels, there may be a conflict
with other packages. For certain packages, the PyPI documentation may clarify which modules are incompatible.
For example, in azure 4.0.0 , there's a statement as follows:
See Update your package to the latest version or Replace the package with equivalents for mitigation.
The package only supports Windows or macOS platforms
Open the requirements.txt with a text editor and check the package in
https://pypi.org/project/<package-name> . Some packages only run on Windows or macOS platforms. For
example, pywin32 only runs on Windows.
The ModuleNotFoundError may not occur when you use Windows or macOS for local development. However,
the package fails to import on Azure Functions, which uses Linux at runtime. This is likely caused by
using pip freeze to export the virtual environment into requirements.txt from your Windows or macOS
machine during project initialization.
See Replace the package with equivalents or Handcraft requirements.txt for mitigation.
Mitigate ModuleNotFoundError
The following are potential mitigations for module-related issues. Use the diagnoses above to determine which
of these mitigations to try.
Enable remote build
Make sure that remote build is enabled. The way that you do this depends on your deployment method.
Visual Studio Code
Azure Functions Core Tools
Manual publishing
Make sure that the latest version of the Azure Functions extension for Visual Studio Code is installed. Verify that
.vscode/settings.json exists and it contains the setting "azureFunctions.scmDoBuildDuringDeployment": true . If
not, please create this file with the azureFunctions.scmDoBuildDuringDeployment setting enabled and redeploy the
project.
Build native dependencies
Make sure that the latest versions of both Docker and Azure Functions Core Tools are installed. Go to your local
function project folder, and use func azure functionapp publish <app-name> --build-native-deps for deployment.
Update your package to the latest version
Browse to the latest package version at https://pypi.org/project/<package-name> and check the Classifiers:
section. The package should be OS Independent , or compatible with POSIX or POSIX :: Linux in Operating
System . The Programming Language section should also contain Python :: 3 , Python :: 3.6 , Python :: 3.7 ,
Python :: 3.8 , or Python :: 3.9 .
If these are correct, you can update the package to the latest version by changing the line to
<package-name>~=<latest-version> in requirements.txt.
Handcraft requirements.txt
Some developers use pip freeze > requirements.txt to generate the list of Python packages for their
development environments. Although this convenience should work in most cases, there can be issues in cross-
platform deployment scenarios, such as developing functions locally on Windows or macOS but publishing to a
function app that runs on Linux. In this scenario, pip freeze can introduce unexpected operating-system-
specific dependencies or dependencies from your local development environment. These dependencies can break
the Python function app when it runs on Linux.
The best practice is to check the import statements in each .py file of your project source code and check in
only those modules in the requirements.txt file, as sketched below. This guarantees that package resolution can
be handled properly on different operating systems.
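Here's a minimal sketch of that check using only the standard library; it lists the top-level modules your .py files import so you can hand-pick the third-party ones for requirements.txt:

import ast
import pathlib

modules = set()
for path in pathlib.Path(".").rglob("*.py"):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])

# Review this list manually; filter out standard-library modules yourself.
print(sorted(modules))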
Replace the package with equivalents
First, check the latest version of the package at https://pypi.org/project/<package-name> . Usually, a
package has its own GitHub page; go to the Issues section there and search for whether your issue has
been fixed. If so, update the package to the latest version.
Sometimes, the package may have been integrated into the Python standard library (such as pathlib). If so,
since Azure Functions provides a fixed set of Python distributions (Python 3.6, 3.7, 3.8, and 3.9), the
package should be removed from your requirements.txt.
However, if the issue hasn't been fixed and you're on a deadline, consider researching a similar package for
your project. Usually, the Python community provides a wide variety of similar libraries that you can use.
Troubleshoot cannot import 'cygrpc'
This error occurs when a Python function app fails to start with a proper Python interpreter. The root cause
is one of the following issues:
The Python interpreter mismatches the OS architecture
The Python interpreter is not supported by the Azure Functions Python worker
Diagnose 'cygrpc' reference error
The Python interpreter mismatches OS architecture
This is most likely caused by a 32-bit Python interpreter installed on your 64-bit operating system. If
you're running on an x64 operating system, ensure that your Python 3.6, 3.7, 3.8, or 3.9 interpreter is
also a 64-bit version.
You can check your Python interpreter bitness with the following commands:
On Windows in PowerShell: py -c 'import platform; print(platform.architecture()[0])'
On Unix-like systems: python3 -c 'import platform; print(platform.architecture()[0])'
If there's a mismatch between Python interpreter bitness and operating system architecture, please download a
proper Python interpreter from Python Software Foundation.
The Python interpreter is not supported by Azure Functions Python Worker
The Azure Functions Python worker supports only Python 3.6, 3.7, 3.8, and 3.9. Check whether your Python
interpreter matches the expected version by running py --version on Windows or python3 --version on Unix-like
systems. Ensure the result is Python 3.6.x, 3.7.x, 3.8.x, or 3.9.x.
If your Python interpreter version doesn't meet this expectation, download the Python 3.6, 3.7, 3.8, or
3.9 interpreter from the Python Software Foundation.
This error occurs when a Python function app is forced to terminate by the operating system with a SIGKILL
signal. This signal usually indicates an out-of-memory error in your Python process. The Azure Functions
platform has a service limitation that terminates any function app exceeding this limit.
Visit the tutorial section on memory profiling for Python functions to analyze the memory bottleneck in
your function app.
This error occurs when a Python function app is forced to terminate by the operating system with a SIGSEGV
signal. This signal indicates a memory segmentation violation, which can be caused by unexpectedly reading
from or writing into a restricted memory region. The following sections list common root
causes.
A regression from third-party packages
In your function app's requirements.txt, an unpinned package is upgraded to the latest version in every
Azure Functions deployment. Vendors of these packages may introduce regressions in their latest releases. To
recover from this issue, try commenting out the import statements, disabling the package references, or pinning
the package to a previous version in requirements.txt (for example, <package-name>==<known-good-version>).
Unpickling from a malformed .pkl file
If your function app uses the Python pickle library to load Python objects from a .pkl file, it's possible that
the .pkl contains a malformed byte string or an invalid address reference. To recover from this issue, try
commenting out the pickle.load() call.
Pyodbc connection collision
If your function app uses the popular ODBC database driver pyodbc, it's possible that multiple connections
are opened within a single function app. To avoid this issue, use the singleton pattern and ensure that only
one pyodbc connection is used across the function app, as sketched below.
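A minimal sketch of that singleton pattern (the helper and connection string are illustrative):

import pyodbc

_CONNECTION = None  # one connection per worker process

def get_connection(connection_string: str):
    # Hypothetical helper: lazily create a single pyodbc connection and
    # reuse it across invocations instead of opening a new one per call.
    global _CONNECTION
    if _CONNECTION is None:
        _CONNECTION = pyodbc.connect(connection_string)
    return _CONNECTION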
Next steps
If you're unable to resolve your issue, please report this to the Functions team:
Report an unresolved issue
Improve throughput performance of Python apps in
Azure Functions
When developing for Azure Functions using Python, you need to understand how your functions perform and
how that performance affects the way your function app gets scaled. This understanding becomes more important
when you design highly performant apps. The main factors to consider when designing, writing, and configuring
your function apps are horizontal scaling and throughput performance configurations.
Horizontal scaling
By default, Azure Functions automatically monitors the load on your application and creates additional host
instances for Python as needed. Azure Functions uses built-in thresholds for different trigger types to decide
when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't
user configurable. For more information, see Event-driven scaling in Azure Functions.
As real-world function workloads are usually a mix of I/O-bound and CPU-bound operations, you should profile
the app under realistic production loads.
Performance-specific configurations
After understanding the workload profile of your function app, the following are configurations that you can use
to improve the throughput performance of your functions.
Async
Multiple language worker
Max workers within a language worker process
Event loop
Vertical Scaling
Async
Because Python is a single-threaded runtime, a host instance for Python can process only one function
invocation at a time by default. For applications that process a large number of I/O events or are otherwise I/O
you can improve performance significantly by running functions asynchronously.
To run a function asynchronously, use the async def statement, which runs the function with asyncio directly:
Here is an example of a function with an HTTP trigger that uses the aiohttp HTTP client (completed into a runnable sketch; the target URL is a placeholder):

import aiohttp

import azure.functions as func

async def main(req: func.HttpRequest) -> func.HttpResponse:
    # Perform the outbound HTTP call without blocking the shared event loop.
    async with aiohttp.ClientSession() as client:
        async with client.get("PLACEHOLDER_URL") as response:
            return func.HttpResponse(await response.text())
A function without the async keyword is run automatically in a ThreadPoolExecutor thread pool:
def main():
some_blocking_socket_io()
To achieve the full benefit of running functions asynchronously, the I/O operations and libraries used in
your code need to have async implementations as well. Using synchronous I/O operations in functions that are
defined as asynchronous may hurt overall performance. If the libraries you're using don't have an async
version implemented, you may still benefit from running your code asynchronously by managing the event loop in
your app.
Here are a few examples of client libraries that implement the async pattern:
aiohttp - HTTP client/server for asyncio
Streams API - High-level async/await-ready primitives to work with network connections
Janus Queue - Thread-safe asyncio-aware queue for Python
pyzmq - Python bindings for ZeroMQ
Understanding async in the Python worker
When you define async in front of a function signature, Python marks the function as a coroutine. When you call
the coroutine, it can be scheduled as a task on an event loop. When you call await in an async function,
it registers a continuation on the event loop, which allows the event loop to process the next task during the
wait time.
In the Python worker, the worker shares the event loop with the customer's async functions and can handle
multiple requests concurrently. We strongly encourage customers to use asyncio-compatible libraries
(such as aiohttp and pyzmq). Following these recommendations greatly increases your function's throughput
compared to libraries implemented in a synchronous fashion.
NOTE
If your function is declared as async without any await inside its implementation, the performance of your
function will be severely impacted: the event loop is blocked, which prohibits the Python worker from handling
concurrent requests. An example of this anti-pattern follows this note.
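For illustration, this is the shape of function the note warns about, deliberately written as an anti-pattern:

import time

import azure.functions as func

async def main(req: func.HttpRequest) -> func.HttpResponse:
    # Anti-pattern: declared async but never awaits; time.sleep blocks the
    # shared event loop and stalls every concurrent request on this worker.
    time.sleep(5)
    return func.HttpResponse("Done")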
Use multiple language worker processes
By default, every Functions host instance has a single language worker process. You can increase the number of
worker processes per host (up to 10) by using the FUNCTIONS_WORKER_PROCESS_COUNT application setting.
Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
For CPU-bound apps, you should set the number of language worker processes to be the same as or higher than the
number of cores that are available per function app. To learn more, see Available instance SKUs.
I/O-bound apps may also benefit from increasing the number of worker processes beyond the number of cores
available. Keep in mind that setting the number of workers too high can impact overall performance due to the
increased number of required context switches.
The FUNCTIONS_WORKER_PROCESS_COUNT setting applies to each host instance that Functions creates when scaling out
your application to meet demand.
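As a sketch of how you might apply this setting (the app and resource group names are placeholders), you can set FUNCTIONS_WORKER_PROCESS_COUNT with the Azure CLI:

az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings FUNCTIONS_WORKER_PROCESS_COUNT=4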
Set up max workers within a language worker process
As mentioned in the async section, the Python language worker treats functions and coroutines differently. A
coroutine is run within the same event loop that the language worker runs on. On the other hand, a function
invocation is run as a thread within a ThreadPoolExecutor that is maintained by the language worker.
You can set the value of maximum workers allowed for running sync functions using the
PYTHON_THREADPOOL_THREAD_COUNT application setting. This value sets the max_workers argument of the
ThreadPoolExecutor object, which lets Python use a pool of at most max_workers threads to execute calls
asynchronously. The PYTHON_THREADPOOL_THREAD_COUNT setting applies to each worker that the Functions host creates,
and Python decides when to create a new thread or reuse an existing idle thread. For older Python versions (that is,
3.8, 3.7, and 3.6), the max_workers value is set to 1. For Python version 3.9, max_workers is set to None.
For CPU-bound apps, you should keep the setting at a low number, starting from 1 and increasing as you
experiment with your workload. This suggestion reduces the time spent on context switches and allows
CPU-bound tasks to finish.
For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each
invocation. The recommendation is to start with the Python default (the number of cores + 4) and then tweak
based on the throughput values you're seeing.
For mixed-workload apps, you should balance both the FUNCTIONS_WORKER_PROCESS_COUNT and
PYTHON_THREADPOOL_THREAD_COUNT configurations to maximize throughput. To understand what your function
apps spend the most time on, we recommend profiling them and setting the values according to the behaviors they
exhibit. Also refer to the previous section to learn about the FUNCTIONS_WORKER_PROCESS_COUNT application setting.
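To see why the thread count matters, consider the following minimal synchronous sketch (not from the article; the sleep stands in for any blocking I/O call). Each invocation holds one thread from the worker's ThreadPoolExecutor for its full duration, so per worker process, at most PYTHON_THREADPOOL_THREAD_COUNT such invocations can run at the same time:

import time
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Blocking work: occupies one ThreadPoolExecutor thread until it finishes.
    time.sleep(2)
    return func.HttpResponse("done")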
NOTE
Although these recommendations apply to both HTTP and non-HTTP triggered functions, you might need to adjust other
trigger specific configurations for non-HTTP triggered functions to get the expected performance from your function apps.
For more information about this, please refer to this article.
Event loop
A synchronous function runs on a ThreadPoolExecutor thread and can't directly await coroutines, but you can
still drive async-capable libraries from it by creating and managing an event loop in your own code. The
following sketch shows one way to do this; the URL list and the helper names are illustrative:

import asyncio
import json

import aiohttp
import azure.functions as func

async def check_response(url: str, session: aiohttp.ClientSession) -> int:
    async with session.get(url) as resp:
        return resp.status

async def check_responses(urls) -> list:
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(
            *[check_response(url, session) for url in urls]
        )

def main(req: func.HttpRequest) -> func.HttpResponse:
    urls = ['https://azure.microsoft.com', 'https://github.com']

    # Create and drive an event loop from this synchronous function.
    eventloop = asyncio.new_event_loop()
    asyncio.set_event_loop(eventloop)
    status_codes = eventloop.run_until_complete(check_responses(urls))
    eventloop.close()

    return func.HttpResponse(body=json.dumps(status_codes),
                             mimetype='application/json')
Vertical scaling
To get more processing units, especially for CPU-bound operations, you might upgrade to a Premium plan with
higher specifications. With more processing units, you can adjust the worker process count according to the
number of available cores and achieve a higher degree of parallelism.
Next steps
For more information about Azure Functions Python development, see the following resources:
Azure Functions Python developer guide
Best practices for Azure Functions
Azure Functions developer reference
Profile Python apps memory usage in Azure Functions
During development or after deploying your local Python function app project to Azure, it's a good practice to
analyze for potential memory bottlenecks in your functions. Such bottlenecks can decrease the performance of
your functions and lead to errors. The following instructions show you how to use the memory-profiler Python
package, which provides line-by-line memory consumption analysis of your functions as they execute.
NOTE
Memory profiling is intended only for memory footprint analysis in development environments. Don't apply the
memory profiler to production function apps.
Prerequisites
Before you start developing a Python function app, you must meet these requirements:
Python 3.6.x or above. To check the full list of supported Python versions in Azure Functions, please visit
Python developer guide.
The Azure Functions Core Tools version 3.x.
Visual Studio Code installed on one of the supported platforms.
An active Azure subscription.
If you don't have an Azure subscription, create an Azure free account before you begin.
1. Add memory-profiler to your requirements.txt file.
2. In your function script, add the following lines above the functions that you want to profile, so that
logs generated by the memory profiler carry the memory_profiler_logs prefix:

import logging
import memory_profiler

root_logger = logging.getLogger()
root_logger.handlers[0].setFormatter(logging.Formatter("%(name)s: %(message)s"))
profiler_logstream = memory_profiler.LogFile('memory_profiler_logs', True)
3. Apply the following decorator above any functions that need memory profiling. This does not work
directly on the trigger entrypoint main() method. You need to create subfunctions and decorate them.
Also, due to a memory-profiler known issue, when applying to an async coroutine, the coroutine return
value will always be None.
@memory_profiler.profile(stream=profiler_logstream)
4. Test the memory profiler on your local machine by using the Azure Functions Core Tools command
func host start. This should generate a memory usage report that includes the file name, line of code, memory
usage, memory increment, and the line content.
5. To check the memory profiling logs on an existing function app instance in Azure, you can query the
memory profiling logs in recent invocations by pasting the following Kusto queries in Application
Insights, Logs.
traces
| where timestamp > ago(1d)
| where message startswith_cs "memory_profiler_logs:"
| parse message with "memory_profiler_logs: " LineNumber " " TotalMem_MiB " " IncreMem_MiB " " Occurences " " Contents
| union (
traces
| where timestamp > ago(1d)
| where message startswith_cs "memory_profiler_logs: Filename: "
| parse message with "memory_profiler_logs: Filename: " FileName
| project timestamp, FileName, itemId
)
| project timestamp, LineNumber=iff(FileName != "", FileName, LineNumber), TotalMem_MiB, IncreMem_MiB, Occurences, Contents, RequestId=itemId
| order by timestamp asc
Example
Here's an example of performing memory profiling on an asynchronous and a synchronous HTTP trigger,
named HttpTriggerAsync and HttpTriggerSync, respectively. We'll build a Python function app that simply
sends GET requests to Microsoft's home page.
Create a Python function app
A Python function app should follow Azure Functions specified folder structure. To scaffold the project, we
recommend using the Azure Functions Core Tools by running the following commands:
func init PythonMemoryProfilingDemo --python
cd PythonMemoryProfilingDemo
func new -l python -t HttpTrigger -n HttpTriggerAsync -a anonymous
func new -l python -t HttpTrigger -n HttpTriggerSync -a anonymous
Update the requirements.txt file with the required packages:

# requirements.txt
azure-functions
memory-profiler
aiohttp
requests
We also need to rewrite the asynchronous HTTP trigger HttpTriggerAsync/__init__.py and configure the
memory profiler, root logger format, and logger streaming binding.
# HttpTriggerAsync/__init__.py

import azure.functions as func
import aiohttp
import logging
import memory_profiler

# Update root logger's format to include the logger name. Ensure logs generated
# from memory profiler can be filtered by "memory_profiler_logs" prefix.
root_logger = logging.getLogger()
root_logger.handlers[0].setFormatter(logging.Formatter("%(name)s: %(message)s"))
profiler_logstream = memory_profiler.LogFile('memory_profiler_logs', True)

async def main(req: func.HttpRequest) -> func.HttpResponse:
    # The trigger entry point itself can't be decorated; profile the subfunction.
    await get_microsoft_page_async('https://microsoft.com')
    return func.HttpResponse("OK")

@memory_profiler.profile(stream=profiler_logstream)
async def get_microsoft_page_async(url: str):
    async with aiohttp.ClientSession() as client:
        async with client.get(url) as response:
            await response.text()
            # @memory_profiler.profile does not support return for coroutines.
            # All returns become None in the parent functions.
            # GitHub Issue: https://github.com/pythonprofilers/memory_profiler/issues/289
For the synchronous HTTP trigger, refer to the following HttpTriggerSync/__init__.py code section:

# HttpTriggerSync/__init__.py

import azure.functions as func
import requests
import logging
import memory_profiler

# Update root logger's format to include the logger name. Ensure logs generated
# from memory profiler can be filtered by "memory_profiler_logs" prefix.
root_logger = logging.getLogger()
root_logger.handlers[0].setFormatter(logging.Formatter("%(name)s: %(message)s"))
profiler_logstream = memory_profiler.LogFile('memory_profiler_logs', True)

def main(req: func.HttpRequest) -> func.HttpResponse:
    content = profile_get_request('https://microsoft.com')
    return func.HttpResponse(f'Response size: {len(content)} bytes')

@memory_profiler.profile(stream=profiler_logstream)
def profile_get_request(url: str):
    response = requests.get(url)
    return response.content
5. Start the Azure Functions runtime locally with the Azure Functions Core Tools command func host start and invoke the functions. The memory profiler then generates a report similar to the following:
Filename: <ProjectRoot>\HttpTriggerAsync\__init__.py
Line # Mem usage Increment Occurences Line Contents
============================================================
19 45.1 MiB 45.1 MiB 1 @memory_profiler.profile
20 async def get_microsoft_page_async(url: str):
21 45.1 MiB 0.0 MiB 1 async with aiohttp.ClientSession() as client:
22 46.6 MiB 1.5 MiB 10 async with client.get(url) as response:
23 47.6 MiB 1.0 MiB 4 await response.text()
Next steps
For more information about Azure Functions Python development, see the following resources:
Azure Functions Python developer guide
Best practices for Azure Functions
Azure Functions developer reference
Azure Functions Core Tools reference
This article provides reference documentation for the Azure Functions Core Tools, which lets you develop,
manage, and deploy Azure Functions projects from your local computer. To learn more about using Core Tools,
see Work with Azure Functions Core Tools.
Core Tools commands are organized into the following contexts, each providing a unique set of actions.
func settings | Commands for managing environment settings for the local Functions host.
Before using the commands in this article, you must install the Core Tools.
func init
Creates a new Functions project in a specific language.
When you supply <PROJECT_FOLDER> , the project is created in a new folder with this name. Otherwise, the current
folder is used.
func init supports the following options, which are version 3.x/2.x-only, unless otherwise noted:
OPTION | DESCRIPTION
--force | Initialize the project even when there are existing files in the project. This setting overwrites existing files with the same name. Other files in the project folder aren't affected.
--worker-runtime | Sets the language runtime for the project. Supported values are: csharp, dotnet, dotnet-isolated, javascript, node (JavaScript), powershell, python, and typescript. For Java, use Maven. To generate a language-agnostic project with just the project files, use custom. When not set, you're prompted to choose your runtime during initialization.
--target-framework | Sets the target framework for the function app project. Valid only with --worker-runtime dotnet-isolated. Supported values are: net6.0 (default), net7.0, and net48.
NOTE
When you use either the --docker or --dockerfile option, Core Tools automatically creates the Dockerfile for C#,
JavaScript, Python, and PowerShell functions. For Java functions, you must manually create the Dockerfile. Use the Azure
Functions image list to find the correct base image for your container that runs Azure Functions.
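For example, the following command creates a Python project in a new folder (MyFunctionProject is a placeholder name):

func init MyFunctionProject --worker-runtime python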
func logs
Gets logs for functions running in a Kubernetes cluster.
func new
Creates a new function in the current project based on a template.
func new
OPTION | DESCRIPTION
--authlevel | Lets you set the authorization level for an HTTP trigger. Supported values are: function, anonymous, admin. Authorization isn't enforced when running locally. For more information, see the HTTP binding article.
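For example, the following command creates an HTTP-triggered function from a template (the function name is a placeholder):

func new --template "HTTP trigger" --name MyHttpTrigger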
func run
Version 1.x only.
Enables you to invoke a function directly, which is similar to running a function using the Test tab in the Azure
portal. This action is only supported in version 1.x. For later versions, use func start and call the function
endpoint directly.
func run
OPTION | DESCRIPTION
--timeout | Time to wait (in seconds) until the local Functions host is ready.
For example, to call an HTTP-triggered function named MyHttpTrigger and pass a content body, run a command similar to the following:

func run MyHttpTrigger --content '{"name": "Azure"}'
func start
Starts the local runtime host and loads the function project in the current folder.
The specific command depends on the runtime version.
v2.x+
v1.x
func start
OPTION | DESCRIPTION
--cert | The path to a .pfx file that contains a private key. Only supported with --useHttps.
--dotnet-isolated-debug | When set to true, pauses the .NET worker process until a debugger is attached from the .NET isolated project being debugged.
--no-build | Don't build the current project before running. For .NET class projects only. The default is false.
--password | Either the password or a file that contains the password for a .pfx file. Only used with --cert.
With the project running, you can verify individual function endpoints.
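For example, to run the host on a different local port than the default, you can pass the --port option:

func start --port 7072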
func azure functionapp logstream
Connects to streaming logs for a function app running in Azure.
The default timeout for the connection is 2 hours. You can change the timeout by adding an app setting named
SCM_LOGSTREAM_TIMEOUT, with a timeout value in seconds. Not yet supported for Linux apps in the
Consumption plan. For these apps, use the --browser option to view logs in the portal.
The logstream action supports the following option:
OPTION | DESCRIPTION
--browser | Open Azure Application Insights Live Stream for the function app in the default browser.
func deploy
The func deploy command is deprecated. Instead, use func kubernetes deploy.
OPTION | DESCRIPTION
--connection-string-setting | Optional name of the app setting that contains the storage connection string to use.
--show-input | When set, the response contains the input of the function.
OPTION | DESCRIPTION
--event-data | Data to pass to the event, either inline or from a JSON file (required). For files, prefix the path to the file with an at sign (@), such as @path/to/file.json.
func extensions sync
Regenerates a missing extensions.csproj file. No action is taken when an extension bundle is defined in your
host.json file.
func kubernetes deploy
This command builds your project as a custom container and publishes it to a Kubernetes cluster. Custom
containers must have a Dockerfile. To create an app with a Dockerfile, use the --dockerfile option with the
func init command.
OPTION | DESCRIPTION
--cooldown-period | The cool-down period (in seconds) after all triggers are no longer active before the deployment scales back down to zero, with a default of 300 seconds.
--image-name | The name of the image to use for the pod deployment and from which to read functions.
--max-replicas | Sets the maximum replica count to which the Horizontal Pod Autoscaler (HPA) scales.
--min-replicas | Sets the minimum replica count below which HPA won't scale.
--name | The name used for the deployment and other artifacts in Kubernetes.
--registry | When set, a Docker build is run and the image is pushed to a registry of that name. You can't use --registry with --image-name. For Docker, use your username.
Core Tools uses the local Docker CLI to build and publish the image. Make sure Docker is already installed
locally, and run the docker login command to connect to your account.
To learn more, see Deploying a function app to Kubernetes.
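For example, a deployment that builds the image under your Docker Hub username might look like the following (the app name and registry are placeholders):

func kubernetes deploy --name <APP_NAME> --registry <DOCKER_USER>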
func kubernetes remove
Removes KEDA from the cluster defined in the kubectl config file.
func settings add
Adds a new setting to the Values collection in the local.settings.json file.
Replace <SETTING_NAME> with the name of the app setting and <VALUE> with the value of the setting.
func settings decrypt
Decrypts previously encrypted values in the Values collection in the local.settings.json file. Connection string
values in the ConnectionStrings collection are also decrypted. In local.settings.json, IsEncrypted is also set to
false. Encrypt local settings to reduce the risk of leaking valuable information from local.settings.json. In Azure,
application settings are always stored encrypted.
func settings delete
Removes an existing setting from the Values collection in the local.settings.json file.
Replace <SETTING_NAME> with the name of the app setting to remove.
func settings encrypt
Encrypts individual items in the Values collection in the local.settings.json file. Connection string values in the
ConnectionStrings collection are also encrypted. In local.settings.json, IsEncrypted is also set to true, which
specifies that the local runtime decrypts settings before using them. Encrypt local settings to reduce the risk of
leaking valuable information from local.settings.json. In Azure, application settings are always stored encrypted.
func settings list
Outputs a list of settings in the Values collection in the local.settings.json file. Connection strings from the
ConnectionStrings collection are also output. By default, values are masked for security. You can use the
--showValue option to display the actual values.
App settings reference for Azure Functions
App settings in a function app contain configuration options that affect all functions for that function app. When
you run locally, these settings are accessed as local environment variables. This article lists the app settings that
are available in function apps.
There are several ways that you can add, update, and delete function app settings:
In the Azure portal.
By using the Azure CLI.
By using Azure PowerShell.
Changes to function app settings require your function app to be restarted.
There are other function app configuration options in the host.json file and in the local.settings.json file. Example
connection string values are truncated for readability.
NOTE
You can use application settings to override host.json setting values without having to change the host.json file itself. This
is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This
also lets you change host.json settings without having to republish your project. To learn more, see the host.json
reference article. Changes to function app settings require your function app to be restarted.
IMPORTANT
Do not use an instrumentation key and a connection string simultaneously. Whichever was set last will take precedence.
APPINSIGHTS_INSTRUMENTATIONKEY
The instrumentation key for Application Insights. Only use one of APPINSIGHTS_INSTRUMENTATIONKEY or
APPLICATIONINSIGHTS_CONNECTION_STRING . When Application Insights runs in a sovereign cloud, use
APPLICATIONINSIGHTS_CONNECTION_STRING . For more information, see How to configure monitoring for Azure
Functions.
KEY | SAMPLE VALUE
APPINSIGHTS_INSTRUMENTATIONKEY | 55555555-af77-484b-9032-64f83bb83bb
NOTE
On March 31, 2025, support for instrumentation key ingestion will end. Instrumentation key ingestion will continue to
work, but we'll no longer provide updates or support for the feature. Transition to connection strings to take advantage of
new capabilities.
APPLICATIONINSIGHTS_CONNECTION_STRING
The connection string for Application Insights. Use APPLICATIONINSIGHTS_CONNECTION_STRING instead of
APPINSIGHTS_INSTRUMENTATIONKEY in the following cases:
When your function app requires the added customizations supported by using the connection string.
When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint.
For more information, see Connection strings.
KEY | SAMPLE VALUE
APPLICATIONINSIGHTS_CONNECTION_STRING | InstrumentationKey=...
AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL
By default, Functions proxies use a shortcut to send API calls from proxies directly to functions in the same
function app. This shortcut is used instead of creating a new HTTP request. This setting allows you to disable that
shortcut behavior.
AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES
This setting controls whether the characters %2F are decoded as slashes in route parameters when they are
inserted into the backend URL.
For example, consider the proxies.json file for a function app at the myfunction.com domain.
{
"$schema": "http://json.schemastore.org/proxies",
"proxies": {
"root": {
"matchCondition": {
"route": "/{*all}"
},
"backendUri": "example.com/{all}"
}
}
}
AZURE_FUNCTIONS_ENVIRONMENT
In version 2.x and later versions of the Functions runtime, configures app behavior based on the runtime
environment. This value is read during initialization, and can be set to any value. Only the values of Development ,
Staging , and Production are honored by the runtime. When this application setting isn't present when running
in Azure, the environment is assumed to be Production . Use this setting instead of ASPNETCORE_ENVIRONMENT if
you need to change the runtime environment in Azure to something other than Production . The Azure
Functions Core Tools set AZURE_FUNCTIONS_ENVIRONMENT to Development when running on a local computer, and
this can't be overridden in the local.settings.json file. To learn more, see Environment-based Startup class and
methods.
AzureFunctionsJobHost__*
In version 2.x and later versions of the Functions runtime, application settings can override host.json settings in
the current environment. These overrides are expressed as application settings named
AzureFunctionsJobHost__path__to__setting . For more information, see Override host.json values.
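For example, to override the logging.logLevel.default value in host.json from an application setting, you would create a setting named AzureFunctionsJobHost__logging__logLevel__default and set its value to the desired level, such as Information.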
AzureFunctionsWebHost__hostid
Sets the host ID for a given function app, which should be a unique ID. This setting overrides the automatically
generated host ID value for your app. Use this setting only when you need to prevent host ID collisions between
function apps that share the same storage account.
A host ID must be between 1 and 32 characters, contain only lowercase letters, numbers, and dashes, not start
or end with a dash, and not contain consecutive dashes. An easy way to generate an ID is to take a GUID, remove
the dashes, and make it lower case, such as by converting the GUID 1835D7B5-5C98-4790-815D-072CC94C6F71 to
the value 1835d7b55c984790815d072cc94c6f71 .
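As a minimal sketch (not from the article), the following Python snippet produces an ID in this form:

import uuid

# uuid4().hex is a GUID with the dashes removed: 32 lowercase hex characters,
# which satisfies the host ID rules described above.
print(uuid.uuid4().hex)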
KEY | SAMPLE VALUE
AzureFunctionsWebHost__hostid | myuniquefunctionappname123456789
AzureWebJobsDashboard
Optional storage account connection string for storing logs and displaying them in the Monitor tab in the
portal. This setting is only valid for apps that target version 1.x of the Azure Functions runtime. The storage
account must be a general-purpose one that supports blobs, queues, and tables. To learn more, see Storage
account requirements.
KEY | SAMPLE VALUE
AzureWebJobsDashboard | DefaultEndpointsProtocol=https;AccountName=...
NOTE
For better performance and experience, runtime version 2.x and later versions use APPINSIGHTS_INSTRUMENTATIONKEY
and App Insights for monitoring instead of AzureWebJobsDashboard .
AzureWebJobsDisableHomepage
true means disable the default landing page that is shown for the root URL of a function app. Default is false .
KEY | SAMPLE VALUE
AzureWebJobsDisableHomepage | true
When this app setting is omitted or set to false , a page similar to the following example is displayed in
response to the URL <functionappname>.azurewebsites.net .
AzureWebJobsDotNetReleaseCompilation
true means use Release mode when compiling .NET code; false means use Debug mode. Default is true .
KEY | SAMPLE VALUE
AzureWebJobsDotNetReleaseCompilation | true
AzureWebJobsFeatureFlags
A comma-delimited list of beta features to enable. Beta features enabled by these flags are not production ready,
but can be enabled for experimental use before they go live.
KEY | SAMPLE VALUE
AzureWebJobsFeatureFlags | feature1,feature2
AzureWebJobsKubernetesSecretName
Indicates the Kubernetes Secrets resource used for storing keys. Supported only when running in Kubernetes.
Requires that AzureWebJobsSecretStorageType be set to kubernetes . When AzureWebJobsKubernetesSecretName
isn't set, the repository is considered read-only. In this case, the values must be generated before deployment.
The Azure Functions Core Tools generates the values automatically when deploying to Kubernetes.
KEY | SAMPLE VALUE
AzureWebJobsKubernetesSecretName | <SECRETS_RESOURCE>
AzureWebJobsSecretStorageKeyVaultClientId
The client ID of the user-assigned managed identity or the app registration used to access the vault where keys
are stored. Requires that AzureWebJobsSecretStorageType be set to keyvault . Supported in version 4.x and later
versions of the Functions runtime.
KEY | SAMPLE VALUE
AzureWebJobsSecretStorageKeyVaultClientId | <CLIENT_ID>
AzureWebJobsSecretStorageKeyVaultClientSecret
The secret for client ID of the user-assigned managed identity or the app registration used to access the vault
where keys are stored. Requires that AzureWebJobsSecretStorageType be set to keyvault . Supported in version
4.x and later versions of the Functions runtime.
KEY | SAMPLE VALUE
AzureWebJobsSecretStorageKeyVaultClientSecret | <CLIENT_SECRET>
AzureWebJobsSecretStorageKeyVaultName
The name of a key vault instance used to store keys. This setting is only supported for version 3.x of the
Functions runtime. For version 4.x, instead use AzureWebJobsSecretStorageKeyVaultUri . Requires that
AzureWebJobsSecretStorageType be set to keyvault .
The vault must have an access policy corresponding to the system-assigned managed identity of the hosting
resource. The access policy should grant the identity the following secret permissions: Get , Set , List , and
Delete .
When running locally, the developer identity is used, and settings must be in the local.settings.json file.
KEY | SAMPLE VALUE
AzureWebJobsSecretStorageKeyVaultName | <VAULT_NAME>
AzureWebJobsSecretStorageKeyVaultTenantId
The tenant ID of the app registration used to access the vault where keys are stored. Requires that
AzureWebJobsSecretStorageType be set to keyvault . Supported in version 4.x and later versions of the Functions
runtime. To learn more, see Secret repositories.
KEY | SAMPLE VALUE
AzureWebJobsSecretStorageKeyVaultTenantId | <TENANT_ID>
AzureWebJobsSecretStorageKeyVaultUri
The URI of a key vault instance used to store keys. Supported in version 4.x and later versions of the Functions
runtime. This is the recommended setting for using a key vault instance for key storage. Requires that
AzureWebJobsSecretStorageType be set to keyvault .
The AzureWebJobsSecretStorageKeyVaultUri value should be the full value of Vault URI displayed in the Key
Vault overview tab, including https://.
The vault must have an access policy corresponding to the system-assigned managed identity of the hosting
resource. The access policy should grant the identity the following secret permissions: Get , Set , List , and
Delete .
When running locally, the developer identity is used, and settings must be in the local.settings.json file.
KEY | SAMPLE VALUE
AzureWebJobsSecretStorageKeyVaultUri | https://<VAULT_NAME>.vault.azure.net
To learn more, see Use Key Vault references for Azure Functions.
AzureWebJobsSecretStorageSas
A Blob Storage SAS URL for a second storage account used for key storage. By default, Functions uses the
account set in AzureWebJobsStorage . When using this secret storage option, make sure that
AzureWebJobsSecretStorageType isn't explicitly set or is set to blob . To learn more, see Secret repositories.
KEY | SAMPLE VALUE
AzureWebJobsSecretStorageSas | <BLOB_SAS_URL>
AzureWebJobsSecretStorageType
Specifies the repository or provider to use for key storage. Keys are always encrypted before being stored using
a secret unique to your function app.
AzureWebJobsStorage
The Azure Functions runtime uses this storage account connection string for normal operation. Some uses of
this storage account include key management, timer trigger management, and Event Hubs checkpoints. The
storage account must be a general-purpose one that supports blobs, queues, and tables. See Storage account
and Storage account requirements.
KEY | SAMPLE VALUE
AzureWebJobsStorage | DefaultEndpointsProtocol=https;AccountName=...
AzureWebJobs_TypeScriptPath
Path to the compiler used for TypeScript. Allows you to override the default if you need to.
KEY | SAMPLE VALUE
AzureWebJobs_TypeScriptPath | %HOME%\typescript
DOCKER_SHM_SIZE
Sets the shared memory size (in bytes) when the Python worker is using shared memory. To learn more, see
Shared memory.
KEY | SAMPLE VALUE
DOCKER_SHM_SIZE | 268435456
FUNCTION_APP_EDIT_MODE
Dictates whether editing in the Azure portal is enabled. Valid values are "readwrite" and "readonly".
KEY | SAMPLE VALUE
FUNCTION_APP_EDIT_MODE | readonly
FUNCTIONS_EXTENSION_VERSION
The version of the Functions runtime that hosts your function app. A tilde ( ~ ) with major version means use the
latest version of that major version (for example, "~3"). When new versions for the same major version are
available, they are automatically installed in the function app. To pin the app to a specific version, use the full
version number (for example, "3.0.12345"). Default is "~3". A value of ~1 pins your app to version 1.x of the
runtime. For more information, see Azure Functions runtime versions overview. A value of ~4 means that your
app runs on version 4.x of the runtime, which supports .NET 6.0.
KEY | SAMPLE VALUE
FUNCTIONS_EXTENSION_VERSION | ~3
FUNCTIONS_V2_COMPATIBILITY_MODE
This setting enables your function app to run in a version 2.x compatible mode on the version 3.x runtime. Use
this setting only if encountering issues when upgrading your function app from version 2.x to 3.x of the runtime.
IMPORTANT
This setting is intended only as a short-term workaround while you update your app to run correctly on version 3.x. This
setting is supported as long as the 2.x runtime is supported. If you encounter issues that prevent your app from running
on version 3.x without using this setting, please report your issue.
Requires that FUNCTIONS_EXTENSION_VERSION be set to ~3 .
KEY | SAMPLE VALUE
FUNCTIONS_V2_COMPATIBILITY_MODE | true
FUNCTIONS_WORKER_PROCESS_COUNT
Specifies the maximum number of language worker processes, with a default value of 1 . The maximum value
allowed is 10 . Function invocations are evenly distributed among language worker processes. Language
worker processes are spawned every 10 seconds until the count set by
FUNCTIONS_WORKER_PROCESS_COUNT is reached. Using multiple language worker processes is not the same
as scaling. Consider using this setting when your workload has a mix of CPU-bound and I/O-bound invocations.
This setting applies to all language runtimes, except for .NET running in process ( dotnet ).
KEY | SAMPLE VALUE
FUNCTIONS_WORKER_PROCESS_COUNT | 2
FUNCTIONS_WORKER_RUNTIME
The language worker runtime to load in the function app. This corresponds to the language being used in your
application (for example, dotnet ). Starting with version 2.x of the Azure Functions runtime, a given function app
can only support a single language.
KEY | SAMPLE VALUE
FUNCTIONS_WORKER_RUNTIME | node
Valid values:
VALUE | LANGUAGE
java | Java
node | JavaScript, TypeScript
powershell | PowerShell
python | Python
FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED
This setting enables the Python worker to use shared memory to improve throughput. Enable shared memory
when your Python function app is hitting memory bottlenecks.
KEY | SAMPLE VALUE
FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED | 1
With this setting enabled, you can use the DOCKER_SHM_SIZE setting to set the shared memory size. To learn
more, see Shared memory.
MDMaxBackgroundUpgradePeriod
Controls the managed dependencies background update period for PowerShell function apps, with a default
value of 7.00:00:00 (weekly).
Each PowerShell worker process initiates checking for module upgrades on the PowerShell Gallery on process
start and every MDMaxBackgroundUpgradePeriod after that. When a new module version is available in the
PowerShell Gallery, it's installed to the file system and made available to PowerShell workers. Decreasing this
value lets your function app get newer module versions sooner, but it also increases the app resource usage
(network I/O, CPU, storage). Increasing this value decreases the app's resource usage, but it may also delay
delivering new module versions to your app.
KEY | SAMPLE VALUE
MDMaxBackgroundUpgradePeriod | 7.00:00:00
MDNewSnapshotCheckPeriod
Specifies how often each PowerShell worker checks whether managed dependency upgrades have been
installed. The default frequency is 01:00:00 (hourly).
After new module versions are installed to the file system, every PowerShell worker process must be restarted.
Restarting PowerShell workers affects your app availability as it can interrupt current function execution. Until all
PowerShell worker processes are restarted, function invocations may use either the old or the new module
versions. Restarting all PowerShell workers completes within MDNewSnapshotCheckPeriod .
Within every MDNewSnapshotCheckPeriod , the PowerShell worker checks whether or not managed dependency
upgrades have been installed. When upgrades have been installed, a restart is initiated. Increasing this value
decreases the frequency of interruptions because of restarts. However, the increase might also increase the time
during which function invocations could use either the old or the new module versions, non-deterministically.
KEY | SAMPLE VALUE
MDNewSnapshotCheckPeriod | 01:00:00
MDMinBackgroundUpgradePeriod
The period of time after a previous managed dependency upgrade check before another upgrade check is
started, with a default of 1.00:00:00 (daily).
To avoid excessive module upgrades on frequent Worker restarts, checking for module upgrades isn't
performed when any worker has already initiated that check in the last MDMinBackgroundUpgradePeriod .
KEY | SAMPLE VALUE
MDMinBackgroundUpgradePeriod | 1.00:00:00
PIP_INDEX_URL
This setting lets you override the base URL of the Python Package Index, which by default is
https://pypi.org/simple . Use this setting when you need to run a remote build using custom dependencies that
are found in a package index repository compliant with PEP 503 (the simple repository API) or in a local
directory that follows the same format.
KEY | SAMPLE VALUE
PIP_INDEX_URL | http://my.custom.package.repo/simple
To learn more, see pip documentation for --index-url and using Custom dependencies in the Python
developer reference.
PIP_EXTRA_INDEX_URL
The value for this setting indicates an extra index URL for custom packages for Python apps, to use in addition to
--index-url. Use this setting when you need to run a remote build using custom dependencies that are
found in an extra package index. The value should follow the same rules as --index-url.
KEY | SAMPLE VALUE
PIP_EXTRA_INDEX_URL | http://my.custom.package.repo/simple
To learn more, see pip documentation for --extra-index-url and Custom dependencies in the Python
developer reference.
PYTHON_ISOLATE_WORKER_DEPENDENCIES (Preview)
The configuration is specific to Python function apps. It defines the prioritization of module loading order. When
your Python function apps face issues related to module collision (e.g. when you're using protobuf, tensorflow,
or grpcio in your project), configuring this app setting to 1 should resolve your issue. By default, this value is
set to 0 . This flag is currently in Preview.
PYTHON_ENABLE_DEBUG_LOGGING
Enables debug-level logging in a Python function app. A value of 1 enables debug-level logging. Without this
setting or with a value of 0 , only information and higher level logs are sent from the Python worker to the
Functions host. Use this setting when debugging or tracing your Python function executions.
When debugging Python functions, make sure to also set a debug or trace logging level in the host.json file, as
needed. To learn more, see How to configure monitoring for Azure Functions.
PYTHON_ENABLE_WORKER_EXTENSIONS
The configuration is specific to Python function apps. Setting this to 1 allows the worker to load in Python
worker extensions defined in requirements.txt. It enables your function app to access new features provided by
third-party packages. It may also change the behavior of function load and invocation in your app. Please ensure
the extension you choose is trustworthy as you bear the risk of using it. Azure Functions gives no express
warranties to any extensions. For how to use an extension, please visit the extension's manual page or readme
doc. By default, this value sets to 0 .
PYTHON_THREADPOOL_THREAD_COUNT
Specifies the maximum number of threads that a Python language worker would use to execute function
invocations, with a default value of 1 for Python version 3.8 and below. For Python version 3.9 and above,
the value is set to None . Note that this setting does not guarantee the number of threads that would be set
during executions. The setting allows Python to expand the number of threads to the specified value. The setting
only applies to Python functions apps. Additionally, the setting applies to synchronous functions invocation and
not for coroutines.
KEY | SAMPLE VALUE | MAX VALUE
PYTHON_THREADPOOL_THREAD_COUNT | 2 | 32
SCALE_CONTROLLER_LOGGING_ENABLED
This setting is currently in preview.
This setting controls logging from the Azure Functions scale controller. For more information, see Scale
controller logs.
KEY | SAMPLE VALUE
SCALE_CONTROLLER_LOGGING_ENABLED | AppInsights:Verbose
The value for this key is supplied in the format <DESTINATION>:<VERBOSITY>, which is defined as follows:
<DESTINATION> | The destination to which logs are sent. Valid values are AppInsights and Blob. When you use AppInsights, ensure that Application Insights is enabled in your function app. When you set the destination to Blob, logs are created in a blob container named azure-functions-scale-controller in the default storage account set in the AzureWebJobsStorage application setting.
<VERBOSITY> | Specifies the level of logging. Supported values are None, Warning, and Verbose.
TIP
Keep in mind that while scale controller logging is enabled, it affects the potential costs of monitoring your
function app. Consider enabling logging until you've collected enough data to understand how the scale controller is
behaving, and then disabling it.
SCM_LOGSTREAM_TIMEOUT
Controls the timeout, in seconds, when connected to streaming logs. The default value is 7200 (2 hours).
KEY | SAMPLE VALUE
SCM_LOGSTREAM_TIMEOUT | 1800
The above sample value of 1800 sets a timeout of 30 minutes. To learn more, see Enable streaming logs.
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING
Connection string for storage account where the function app code and configuration are stored in event-driven
scaling plans. For more information, see Create a function app.
KEY | SAMPLE VALUE
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING | DefaultEndpointsProtocol=https;AccountName=...
This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required
for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting may cause your function app to not start. To learn more, see this
troubleshooting article.
WEBSITE_CONTENTOVERVNET
A value of 1 enables your function app to scale when you have your storage account restricted to a virtual
network. You should enable this setting when restricting your storage account to a virtual network. To learn
more, see Restrict your storage account to a virtual network.
KEY | SAMPLE VALUE
WEBSITE_CONTENTOVERVNET | 1
Supported on Premium and Dedicated (App Service) plans (Standard and higher). Not supported when running
on a Consumption plan.
WEBSITE_CONTENTSHARE
The file path to the function app code and configuration in event-driven scaling plans. Used with
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. Default is a unique string generated by the runtime that
begins with the function app name. See Create a function app.
KEY | SAMPLE VALUE
WEBSITE_CONTENTSHARE | functionapp091999e2
This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required
for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting may cause your function app to not start. To learn more, see this
troubleshooting article.
The following considerations apply when using an Azure Resource Manager (ARM) template to create a function
app during deployment:
When you don't set a WEBSITE_CONTENTSHARE value for the main function app or any apps in slots, unique
share values are generated for you. This is the recommended approach for an ARM template deployment.
There are scenarios where you must set the WEBSITE_CONTENTSHARE value to a predefined share, such as when
you use a secured storage account in a virtual network. In this case, you must set a unique share name for
the main function app and the app for each deployment slot.
Don't make WEBSITE_CONTENTSHARE a slot setting.
When you specify WEBSITE_CONTENTSHARE , the value must follow this guidance for share names.
To learn more, see Automate resource deployment for your function app.
WEBSITE_DNS_SERVER
Sets the DNS server used by an app when resolving IP addresses. This setting is often required when using
certain networking functionality, such as Azure DNS private zones and private endpoints.
KEY | SAMPLE VALUE
WEBSITE_DNS_SERVER | 168.63.129.16
WEBSITE_ENABLE_BROTLI_ENCODING
Controls whether Brotli encoding is used for compression instead of the default gzip compression. When
WEBSITE_ENABLE_BROTLI_ENCODING is set to 1 , Brotli encoding is used; otherwise gzip encoding is used.
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
The maximum number of instances that the app can scale out to. Default is no limit.
IMPORTANT
This setting is in preview. An app property for function max scale out has been added and is the recommended way to
limit scale out.
KEY | SAMPLE VALUE
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT | 5
WEBSITE_NODE_DEFAULT_VERSION
Windows only. Sets the version of Node.js to use when running your function app on Windows. You should use
a tilde (~) to have the runtime use the latest available version of the targeted major version. For example, when
set to ~10 , the latest version of Node.js 10 is used. When a major version is targeted with a tilde, you don't have
to manually update the minor version.
KEY | SAMPLE VALUE
WEBSITE_NODE_DEFAULT_VERSION | ~10
WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS
By default, the version settings for function apps are specific to each slot. This setting is used when upgrading
functions by using deployment slots. This prevents unanticipated behavior due to changing versions after a
swap. Set to 0 in production and in the slot to make sure that all version settings are also swapped. For more
information, see Migrate using slots.
KEY | SAMPLE VALUE
WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS | 0
WEBSITE_RUN_FROM_PACKAGE
Enables your function app to run from a mounted package file.
KEY | SAMPLE VALUE
WEBSITE_RUN_FROM_PACKAGE | 1
Valid values are either a URL that resolves to the location of a deployment package file, or 1 . When set to 1 ,
the package must be in the d:\home\data\SitePackages folder. When using zip deployment with this setting, the
package is automatically uploaded to this location. In preview, this setting was named WEBSITE_RUN_FROM_ZIP . For
more information, see Run your functions from a package file.
WEBSITE_SKIP_CONTENTSHARE_VALIDATION
The WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE settings have
additional validation checks to ensure that the app can be properly started. Creation of application settings will
fail if the Function App cannot properly call out to the downstream Storage Account or Key Vault due to
networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to
1 , the validation check is skipped; otherwise the value defaults to 0 and the validation will take place.
KEY | SAMPLE VALUE
WEBSITE_SKIP_CONTENTSHARE_VALIDATION | 1
If validation is skipped and either the connection string or content share are not valid, the app will be unable to
start properly and will only serve HTTP 500 errors.
WEBSITE_TIME_ZONE
Allows you to set the timezone for your function app.
The default time zone used with the CRON expressions is Coordinated Universal Time (UTC). To have your
CRON expression based on another time zone, create an app setting for your function app named
WEBSITE_TIME_ZONE .
The value of this setting depends on the operating system and plan on which your function app runs.
For example, Eastern Time in the US (represented by Eastern Standard Time (Windows) or America/New_York
(Linux)) currently uses UTC-05:00 during standard time and UTC-04:00 during daylight time. To have a timer
trigger fire at 10:00 AM Eastern Time every day, create an app setting for your function app named
WEBSITE_TIME_ZONE , set the value to Eastern Standard Time (Windows) or America/New_York (Linux), and then
use the following NCRONTAB expression:
"0 0 10 * * *"
When you use WEBSITE_TIME_ZONE the time is adjusted for time changes in the specific timezone, including
daylight saving time and changes in standard time.
WEBSITE_VNET_ROUTE_ALL
IMPORTANT
WEBSITE_VNET_ROUTE_ALL is a legacy app setting that has been replaced by the vnetRouteAllEnabled configuration
setting.
Indicates whether all outbound traffic from the app is routed through the virtual network. A setting value of 1
indicates that all traffic is routed through the virtual network. You need this setting when using features of
Regional virtual network integration. It's also used when a virtual network NAT gateway is used to define a static
outbound IP address.
KEY | SAMPLE VALUE
WEBSITE_VNET_ROUTE_ALL | 1
Next steps
Learn how to update app settings
See configuration settings in the host.json file
See other app settings for App Service apps
Azure Blob storage bindings for Azure Functions overview
Azure Functions integrates with Azure Storage via triggers and bindings. Integrating with Blob storage allows
you to build functions that react to changes in blob data as well as read and write values.
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The functionality of the extension varies depending on the extension version:
Extension 5.x and higher
Functions 2.x and higher
Functions 1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
This version allows you to bind to types from Azure.Storage.Blobs. Learn more about how these new types are
different from WindowsAzure.Storage and Microsoft.Azure.Storage and how to migrate to them from the
Azure.Storage.Blobs Migration Guide.
This extension is available by installing the Microsoft.Azure.WebJobs.Extensions.Storage.Blobs NuGet package,
version 5.x.
Using the .NET CLI:

dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Blobs
Install bundle
The Blob storage binding is part of an extension bundle, which is specified in your host.json project file. You may
need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn
more, see extension bundle.
Bundle v3.x
Bundle v2.x
Functions 1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
You can add this version of the extension from the extension bundle v3 by adding or replacing the following
code in your host.json file:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
NOTE
Version 3.x of the extension bundle currently does not include the Table Storage bindings. If your app requires Table
Storage, you will need to continue using the 2.x version for now.
host.json settings
This section describes the function app configuration settings available for functions that use this binding. These
settings only apply when using extension version 5.0.0 and higher. The example host.json file below contains
only the version 2.x+ settings for this binding. For more information about function app configuration settings
in versions 2.x and later versions, see host.json reference for Azure Functions.
NOTE
This section doesn't apply to extension versions before 5.0.0. For those earlier versions, there aren't any function app-wide
configuration settings for blobs.
{
"version": "2.0",
"extensions": {
"blobs": {
"maxDegreeOfParallelism": 4
}
}
}
Next steps
Run a function when blob storage data changes
Read blob storage data when a function runs
Write blob storage data from a function
Azure Blob storage trigger for Azure Functions
The Blob storage trigger starts a function when a new or updated blob is detected. The blob contents are
provided as input to the function.
The Azure Blob storage trigger requires a general-purpose storage account. Storage V2 accounts with
hierarchical namespaces are also supported. To use a blob-only account, or if your application has specialized
needs, review the alternatives to using this trigger.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that writes a log when a blob is added or updated in the
samples-workitems container.
[FunctionName("BlobTriggerCSharp")]
public static void Run([BlobTrigger("samples-workitems/{name}")] Stream myBlob, string name, ILogger log)
{
log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length}
Bytes");
}
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can
use in function code to access the file name of the triggering blob. For more information, see Blob name
patterns later in this article.
For more information about the BlobTrigger attribute, see Attributes.
This function writes a log when a blob is added or updated in the myblob container.
@FunctionName("blobprocessor")
public void run(
@BlobTrigger(name = "file",
dataType = "binary",
path = "myblob/{name}",
connection = "MyStorageAccountAppSetting") byte[] content,
@BindingName("name") String filename,
final ExecutionContext context
) {
context.getLogger().info("Name: " + filename + " Size: " + content.length + " bytes");
}
The following example shows a blob trigger binding in a function.json file and JavaScript code that uses the
binding. The function writes a log when a blob is added or updated in the samples-workitems container.
Here's the function.json file:
{
"disabled": false,
"bindings": [
{
"name": "myBlob",
"type": "blobTrigger",
"direction": "in",
"path": "samples-workitems/{name}",
"connection":"MyStorageAccountAppSetting"
}
]
}
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can
use in function code to access the file name of the triggering blob. For more information, see Blob name
patterns later in this article.
For more information about function.json file properties, see the Configuration section.
Here's the JavaScript code:

module.exports = async function (context, myBlob) {
    context.log("JavaScript blob trigger function processed blob \n Blob:", context.bindingData.blobTrigger, "\n Blob Size:", myBlob.length, "Bytes");
};
The following example demonstrates how to create a function that runs when a file is added to the source blob
storage container.
The function configuration file (function.json) includes a binding with the type of blobTrigger and direction
set to in .
{
"bindings": [
{
"name": "InputBlob",
"type": "blobTrigger",
"direction": "in",
"path": "source/{name}",
"connection": "MyStorageAccountConnectionString"
}
]
}
The following example shows a blob trigger binding in a function.json file and Python code that uses the
binding. The function writes a log when a blob is added or updated in the samples-workitems container.
Here's the function.json file:
{
"scriptFile": "__init__.py",
"disabled": false,
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "samples-workitems/{name}",
"connection":"MyStorageAccountAppSetting"
}
]
}
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can
use in function code to access the file name of the triggering blob. For more information, see Blob name
patterns later in this article.
For more information about function.json file properties, see the Configuration section.
Here's the Python code:

import logging
import azure.functions as func

def main(myblob: func.InputStream):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")
Attributes
Both in-process and isolated process C# libraries use the BlobTriggerAttribute to define the function. C# script
instead uses a function.json configuration file.
In-process
Isolated process
C# script
In C# class libraries, the attribute's constructor takes a path string that indicates the container to watch and
optionally a blob name pattern. Here's an example:
[FunctionName("ResizeImage")]
public static void Run(
[BlobTrigger("sample-images/{name}")] Stream image,
[Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall)
{
....
}
While the attribute takes a Connection property, you can also use the StorageAccountAttribute to specify a
storage account connection. You can do this when you need to use a different storage account than other
functions in the library. The constructor takes the name of an app setting that contains a storage connection
string. The attribute can be applied at the parameter, method, or class level. The following example shows class
level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("StorageTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
...
}
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
PROPERTY | DESCRIPTION
name | The name of the variable that represents the blob in function code.
Metadata
The blob trigger provides several metadata properties. These properties can be used as part of binding
expressions in other bindings or as parameters in your code. These values have the same semantics as the
CloudBlob type.
The following example logs the path to the triggering blob, including the container:
Metadata can be obtained from the bindingData property of the supplied context object, as shown in the
following example, which logs the path to the triggering blob ( blobTrigger ), including the container:
Metadata is available through the $TriggerMetadata parameter.
Usage
The usage of the Blob trigger depends on the extension package version, and the C# modality used in your
function app, which can be one of the following:
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
Choose a version to see usage details for the mode and version.
The following parameter types are extension version-specific and require FileAccess.ReadWrite in your C# class
library:
BlobClient
BlockBlobClient
PageBlobClient
AppendBlobClient
BlobBaseClient
For examples using these types, see the GitHub repository for the extension. Learn more about how these new types
are different and how to migrate to them in the Azure.Storage.Blobs Migration Guide.
You can also use the StorageAccountAttribute to specify the storage account to use. You can do this when you
need to use a different storage account than other functions in the library. The constructor takes the name of an
app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or
class level. The following example shows class level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("BlobTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
....
}
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Azure Blobs. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string, follow the steps shown at Manage storage account access keys. The connection
string must be for a general-purpose storage account, not a Blob storage account.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For
example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyStorage." If you leave connection empty, the Functions runtime uses the default Storage
connection string in the app setting that is named AzureWebJobsStorage .
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
If you are setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all
other connections, the extension requires the following properties:
<CONNECTION_NAME_PREFIX>__serviceUri: The data plane URI of the blob service to which you are connecting, using the HTTPS scheme. Example: https://<storage_account_name>.blob.core.windows.net
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your blob container at runtime. Management
roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using
the Blob Storage extension in normal operation. Your application may require additional permissions based on
the code you write.
1 The blob trigger handles failure across multiple retries by writing poison blobs to a queue on the storage
account specified by the connection.
2 The AzureWebJobsStorage connection is used internally for blobs and queues that enable the trigger. If it is
configured to use an identity-based connection, it will need additional permissions beyond the default
requirement. These are covered by the Storage Blob Data Owner, Storage Queue Data Contributor, and Storage
Account Contributor roles. To learn more, see Connecting to host storage with an identity.
"path": "input/{blobname}.{blobextension}",
If the blob is named original-Blob1.txt, the values of the blobname and blobextension variables in function code
are original-Blob1 and txt.
Filter on blob name
The following example triggers only on blobs in the input container that start with the string "original-":
"path": "input/original-{name}",
If the blob name is original-Blob1.txt, the value of the name variable in function code is Blob1.txt .
Filter on file type
The following example triggers only on .png files:
"path": "samples/{name}.png",
"path": "images/{{20140101}}-{name}",
If the blob is named {20140101}-soundfile.mp3, the name variable value in the function code is soundfile.mp3.
Polling
Polling works as a hybrid between inspecting logs and running periodic container scans. Blobs are scanned in
groups of 10,000 at a time with a continuation token used between intervals.
WARNING
Storage logs are created on a "best effort" basis. There's no guarantee that all events are captured. Under
some conditions, logs may be missed.
If you require faster or more reliable blob processing, consider creating a queue message when you create the blob. Then
use a queue trigger instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial
Automate resizing uploaded images using Event Grid.
Alternatives
Event Grid trigger
NOTE
When using Storage Extensions 5.x and higher, the Blob trigger has built-in support for an Event Grid based Blob trigger.
For more information, see the Storage extension 5.x and higher section below.
The Event Grid trigger also has built-in support for blob events. Use Event Grid instead of the Blob storage
trigger for the following scenarios:
Blob-only storage accounts: Blob-only storage accounts are supported for blob input and output
bindings but not for blob triggers.
High scale: High scale can be loosely defined as containers that have more than 100,000 blobs in them
or storage accounts that have more than 100 blob updates per second.
Existing blobs: The blob trigger will process all existing blobs in the container when you set up the
trigger. If you have a container with many existing blobs and only want to trigger for new blobs, use the
Event Grid trigger.
Minimizing latency: If your function app is on the Consumption plan, there can be up to a 10-minute
delay in processing new blobs if a function app has gone idle. To avoid this latency, you can switch to an
App Service plan with Always On enabled. You can also use an Event Grid trigger with your Blob storage
account. For an example, see the Event Grid tutorial.
See the Image resize with Event Grid tutorial for an Event Grid example.
Storage Extension 5.x and higher
When using version 5.x or higher of the storage extension, there is built-in support for Event Grid in the Blob
trigger, which requires setting the source parameter to Event Grid in your existing Blob trigger.
For more information on how to use the Blob Trigger based on Event Grid, refer to the Event Grid Blob Trigger
guide.
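As an illustration, a hedged in-process C# sketch (the function and container names are assumptions; the BlobTriggerSource.EventGrid value ships with version 5.x of the extension):
[FunctionName("EventGridBlobTrigger")]
public static void Run(
    [BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid)] Stream myBlob,
    string name,
    ILogger log)
{
    // Invoked from Event Grid blob events rather than container polling.
    log.LogInformation($"Event Grid-based blob trigger processed blob: {name}");
}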
Queue storage trigger
Another approach to processing blobs is to write queue messages that correspond to blobs being created or
modified and then use a Queue storage trigger to begin processing.
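For example, a sketch of that pattern in C# (the queue and container names are assumptions; the uploader enqueues the blob name when it creates the blob):
[FunctionName("ProcessBlobViaQueue")]
public static void Run(
    // The queue message carries the name of the blob to process.
    [QueueTrigger("blob-notifications")] string blobName,
    // {queueTrigger} resolves the blob to read from the message text.
    [Blob("uploads/{queueTrigger}", FileAccess.Read)] Stream blob,
    ILogger log)
{
    log.LogInformation($"Processing {blobName}: {blob.Length} bytes");
}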
Blob receipts
The Azure Functions runtime ensures that no blob trigger function gets called more than once for the same new
or updated blob. To determine if a given blob version has been processed, it maintains blob receipts.
Azure Functions stores blob receipts in a container named azure-webjobs-hosts in the Azure storage account for
your function app (defined by the app setting AzureWebJobsStorage ). A blob receipt has the following
information:
The triggered function ( <FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME> , for example:
MyFunctionApp.Functions.CopyBlob )
The container name
The blob type ( BlockBlob or PageBlob )
The blob name
The ETag (a blob version identifier, for example: 0x8D1DC6E70A277EF )
To force reprocessing of a blob, delete the blob receipt for that blob from the azure-webjobs-hosts container
manually. While reprocessing might not occur immediately, it's guaranteed to occur at a later point in time. To
reprocess immediately, the scaninfo blob in azure-webjobs-hosts/blobscaninfo can be updated. Any blobs with a
last modified timestamp after the LatestScan property will be scanned again.
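For example, a sketch using the Azure.Storage.Blobs client (the receipt path layout shown in the comment is a placeholder; inspect the azure-webjobs-hosts container for the actual structure):
using System;
using Azure.Storage.Blobs;

public static class BlobReceipts
{
    public static void ForceReprocess(string receiptPath)
    {
        // receiptPath is the receipt blob's path inside azure-webjobs-hosts, for example:
        // "blobreceipts/<host-id>/<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>/<etag>/<container>/<blob>"
        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"),
            "azure-webjobs-hosts");
        // Deleting the receipt makes the runtime treat the blob as unprocessed on a later scan.
        container.GetBlobClient(receiptPath).DeleteIfExists();
    }
}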
Poison blobs
When a blob trigger function fails for a given blob, Azure Functions retries that function a total of five times by
default.
If all five tries fail, Azure Functions adds a message to a Storage queue named webjobs-blobtrigger-poison. The
maximum number of retries is configurable. The same MaxDequeueCount setting is used for poison blob
handling and poison queue message handling. The queue message for poison blobs is a JSON object that
contains the following properties:
FunctionId (in the format <FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME> )
BlobType ( BlockBlob or PageBlob )
ContainerName
BlobName
ETag (a blob version identifier, for example: 0x8D1DC6E70A277EF )
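If you want to observe or handle these failures, one approach is a queue-triggered function on the poison queue, sketched here (the function name is an assumption):
[FunctionName("HandlePoisonBlobs")]
public static void Run(
    [QueueTrigger("webjobs-blobtrigger-poison")] string poisonMessage,
    ILogger log)
{
    // poisonMessage is the JSON object described above
    // (FunctionId, BlobType, ContainerName, BlobName, ETag).
    log.LogError($"Blob failed all retries: {poisonMessage}");
}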
NOTE
For apps using the 5.0.0 or higher version of the Storage extension, the queues configuration in host.json only applies to
queue triggers. The blob trigger concurrency is instead controlled by blobs configuration in host.json.
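For example, a host.json sketch of that configuration (the maxDegreeOfParallelism value shown is illustrative):
{
    "version": "2.0",
    "extensions": {
        "blobs": {
            "maxDegreeOfParallelism": 4
        }
    }
}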
The Consumption plan limits a function app on one virtual machine (VM) to 1.5 GB of memory. Memory is used
by each concurrently executing function instance and by the Functions runtime itself. If a blob-triggered function
loads the entire blob into memory, the maximum memory used by that function just for blobs is 24 * maximum
blob size, where 24 is the default per-function concurrency limit (the default queues batchSize of 16 plus the
default newBatchThreshold of 8). For example, a function app with three blob-triggered functions and the default
settings would have a maximum per-VM concurrency of 3 * 24 = 72 function invocations.
JavaScript and Java functions load the entire blob into memory, and C# functions do that if you bind to string ,
or Byte[] .
host.json properties
The host.json file contains settings that control blob trigger behavior. See the host.json settings section for
details regarding available settings.
Next steps
Read blob storage data when a function runs
Write blob storage data from a function
Azure Blob storage input binding for Azure
Functions
The input binding allows you to read blob storage data as input to an Azure Function.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example is a C# function that uses a queue trigger and an
input blob binding. The queue message contains the name of the blob, and the function logs the size of the
blob.
[FunctionName("BlobInput")]
public static void Run(
[QueueTrigger("myqueue-items")] string myQueueItem,
[Blob("samples-workitems/{queueTrigger}", FileAccess.Read)] Stream myBlob,
ILogger log)
{
log.LogInformation($"BlobInput processed blob\n Name:{myQueueItem} \n Size: {myBlob.Length} bytes");
}
@FunctionName("getBlobSize")
@StorageAccount("Storage_Account_Connection_String")
public void blobSize(
@QueueTrigger(
name = "filename",
queueName = "myqueue-items-sample")
String filename,
@BlobInput(
name = "file",
dataType = "binary",
path = "samples-workitems/{queueTrigger}")
byte[] content,
final ExecutionContext context) {
context.getLogger().info("The size of \"" + filename + "\" is: " + content.length + " bytes");
}
In the Java functions runtime library, use the @BlobInput annotation on parameters whose value would come
from a blob. This annotation can be used with native Java types, POJOs, or nullable values using Optional<T> .
The following example shows blob input and output bindings in a function.json file and JavaScript code that
uses the bindings. The function makes a copy of a blob. The function is triggered by a queue message that
contains the name of the blob to copy. The new blob is named {originalblobname}-Copy.
In the function.json file, the queueTrigger metadata property is used to specify the blob name in the path
properties:
{
"bindings": [
{
"queueName": "myqueue-items",
"connection": "MyStorageConnectionAppSetting",
"name": "myQueueItem",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "myInputBlob",
"type": "blob",
"path": "samples-workitems/{queueTrigger}",
"connection": "MyStorageConnectionAppSetting",
"direction": "in"
},
{
"name": "myOutputBlob",
"type": "blob",
"path": "samples-workitems/{queueTrigger}-Copy",
"connection": "MyStorageConnectionAppSetting",
"direction": "out"
}
],
"disabled": false
}
The following example shows a blob input binding, defined in the function.json file, which makes the incoming
blob data available to the PowerShell function.
Here's the json configuration:
{
"bindings": [
{
"name": "InputBlob",
"type": "blobTrigger",
"direction": "in",
"path": "source/{name}",
"connection": "AzureWebJobsStorage"
}
]
}
The following example shows blob input and output bindings in a function.json file and Python code that uses
the bindings. The function makes a copy of a blob. The function is triggered by a queue message that contains
the name of the blob to copy. The new blob is named {originalblobname}-Copy.
In the function.json file, the queueTrigger metadata property is used to specify the blob name in the path
properties:
{
"bindings": [
{
"queueName": "myqueue-items",
"connection": "MyStorageConnectionAppSetting",
"name": "queuemsg",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "inputblob",
"type": "blob",
"dataType": "binary",
"path": "samples-workitems/{queueTrigger}",
"connection": "MyStorageConnectionAppSetting",
"direction": "in"
},
{
"name": "$return",
"type": "blob",
"path": "samples-workitems/{queueTrigger}-Copy",
"connection": "MyStorageConnectionAppSetting",
"direction": "out"
}
],
"disabled": false,
"scriptFile": "__init__.py"
}
If the dataType property is not defined in function.json, the default value is string .
Here's the Python code:
import logging
import azure.functions as func
# The input binding field inputblob can either be 'bytes' or 'str' depends
# on dataType in function.json, 'binary' or 'string'.
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
logging.info(f'Python Queue trigger function processed {len(inputblob)} bytes')
return inputblob
Attributes
Both in-process and isolated process C# libraries use attributes to define the function. C# script instead uses a
function.json configuration file.
In-process
Isolated process
C# script
In C# class libraries, use the BlobAttribute, which takes the following parameters:
The following example shows how the attribute's constructor takes the path to the blob and a FileAccess
parameter indicating read for the input binding:
[FunctionName("BlobInput")]
public static void Run(
[QueueTrigger("myqueue-items")] string myQueueItem,
[Blob("samples-workitems/{queueTrigger}", FileAccess.Read)] Stream myBlob,
ILogger log)
{
log.LogInformation($"BlobInput processed blob\n Name:{myQueueItem} \n Size: {myBlob.Length} bytes");
}
While the attribute takes a Connection property, you can also use the StorageAccountAttribute to specify a
storage account connection. You can do this when you need to use a different storage account than other
functions in the library. The constructor takes the name of an app setting that contains a storage connection
string. The attribute can be applied at the parameter, method, or class level. The following example shows class
level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("StorageTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
...
}
Annotations
The @BlobInput annotation gives you access to the blob that the binding reads. If you use a byte array with
the annotation, set dataType to binary. Refer to the input example for details.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
name: The name of the variable that represents the blob in function code.
Usage
The usage of the Blob input binding depends on the extension package version and the C# modality used in
your function app, which can be one of the following:
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
Choose a version to see usage details for the mode and version.
The following parameter types are extension version-specific and require FileAccess.ReadWrite in your C# class
library:
BlobClient
BlockBlobClient
PageBlobClient
AppendBlobClient
BlobBaseClient
For examples using these types, see the GitHub repository for the extension. To learn more about how these new
types differ and how to migrate to them, see the Azure.Storage.Blobs Migration Guide.
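As a sketch, binding one of these types in an in-process class library might look like the following (the function, queue, and container names are assumptions):
using System.Threading.Tasks;
using Azure.Storage.Blobs;

[FunctionName("BlobClientInput")]
public static async Task Run(
    [QueueTrigger("myqueue-items")] string myQueueItem,
    [Blob("samples-workitems/{queueTrigger}", FileAccess.ReadWrite)] BlobClient blobClient,
    ILogger log)
{
    // BlobClient exposes the full Azure.Storage.Blobs API surface for the bound blob.
    var properties = await blobClient.GetPropertiesAsync();
    log.LogInformation($"Blob {myQueueItem} is {properties.Value.ContentLength} bytes");
}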
You can also use the StorageAccountAttribute to specify the storage account to use. You can do this when you
need to use a different storage account than other functions in the library. The constructor takes the name of an
app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or
class level. The following example shows class level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("BlobTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
....
}
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Azure Blobs. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string, follow the steps shown at Manage storage account access keys. The connection
string must be for a general-purpose storage account, not a Blob storage account.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For
example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyStorage." If you leave connection empty, the Functions runtime uses the default Storage
connection string in the app setting that is named AzureWebJobsStorage .
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
If you are setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all
other connections, the extension requires the following properties:
<CONNECTION_NAME_PREFIX>__serviceUri 1: The data plane URI of the blob service to which you are connecting, using the HTTPS scheme. Example: https://<storage_account_name>.blob.core.windows.net
1 <CONNECTION_NAME_PREFIX>__blobServiceUri can be used as an alias. If the connection configuration will be used
by a blob trigger, blobServiceUri must also be accompanied by queueServiceUri . See below.
The serviceUri form cannot be used when the overall connection configuration is to be used across blobs,
queues, and/or tables. The URI itself can only designate the blob service. As an alternative, you can provide a URI
specifically for each service, allowing a single connection to be used. If both versions are provided, the multi-
service form will be used. To configure the connection for multiple services, instead of
<CONNECTION_NAME_PREFIX>__serviceUri , set:
<CONNECTION_NAME_PREFIX>__blobServiceUri: The data plane URI of the blob service. Example: https://<storage_account_name>.blob.core.windows.net
<CONNECTION_NAME_PREFIX>__queueServiceUri: The data plane URI of the queue service. Example: https://<storage_account_name>.queue.core.windows.net
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your blob container at runtime. Management
roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using
the Blob Storage extension in normal operation. Your application may require additional permissions based on
the code you write.
Trigger: Storage Blob Data Owner and Storage Queue Data Contributor 1 (additional permissions must also be granted to the AzureWebJobsStorage connection 2)
Input binding: Storage Blob Data Reader
Output binding: Storage Blob Data Owner
1 The blob trigger handles failure across multiple retries by writing poison blobs to a queue on the storage
account specified by the connection.
2 The AzureWebJobsStorage connection is used internally for blobs and queues that enable the trigger. If it is
configured to use an identity-based connection, it will need additional permissions beyond the default
requirement. These are covered by the Storage Blob Data Owner, Storage Queue Data Contributor, and Storage
Account Contributor roles. To learn more, see Connecting to host storage with an identity.
Next steps
Run a function when blob storage data changes
Write blob storage data from a function
Azure Blob storage output binding for Azure
Functions
The output binding allows you to modify and delete blob storage data in an Azure Function.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example is a C# function that runs in-process and uses a blob trigger and two output blob
bindings. The function is triggered by the creation of an image blob in the sample-images container. It creates
small and medium size copies of the image blob.
using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.WebJobs;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;
[FunctionName("ResizeImage")]
public static void Run(
    [BlobTrigger("sample-images/{name}")] Stream image,
    [Blob("sample-images-sm/{name}", FileAccess.Write)] Stream imageSmall,
    [Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageMedium)
{
    IImageFormat format;
    using (Image<Rgba32> input = Image.Load<Rgba32>(image, out format))
    {
        ResizeImage(input, imageSmall, ImageSize.Small, format);
    }
    image.Position = 0;
    using (Image<Rgba32> input = Image.Load<Rgba32>(image, out format))
    {
        ResizeImage(input, imageMedium, ImageSize.Medium, format);
    }
}
public static void ResizeImage(Image<Rgba32> input, Stream output, ImageSize size, IImageFormat format)
{
    // imageDimensionsTable maps ImageSize values to (width, height) tuples;
    // its definition is elided in this excerpt.
    var dimensions = imageDimensionsTable[size];
    input.Mutate(x => x.Resize(dimensions.Item1, dimensions.Item2));
    input.Save(output, format);
}
@FunctionName("copyBlobQueueTrigger")
@StorageAccount("Storage_Account_Connection_String")
@BlobOutput(
name = "target",
path = "myblob/{queueTrigger}-Copy")
public String copyBlobQueue(
@QueueTrigger(
name = "filename",
dataType = "string",
queueName = "myqueue-items")
String filename,
@BlobInput(
name = "file",
path = "samples-workitems/{queueTrigger}")
String content,
final ExecutionContext context) {
context.getLogger().info("The content of \"" + filename + "\" is: " + content);
return content;
}
In the Java functions runtime library, use the @BlobOutput annotation on function parameters whose value
would be written to an object in blob storage. The parameter type should be OutputBinding<T> , where T is any
native Java type or a POJO.
The following example shows blob input and output bindings in a function.json file and JavaScript code that
uses the bindings. The function makes a copy of a blob. The function is triggered by a queue message that
contains the name of the blob to copy. The new blob is named {originalblobname}-Copy.
In the function.json file, the queueTrigger metadata property is used to specify the blob name in the path
properties:
{
"bindings": [
{
"queueName": "myqueue-items",
"connection": "MyStorageConnectionAppSetting",
"name": "myQueueItem",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "myInputBlob",
"type": "blob",
"path": "samples-workitems/{queueTrigger}",
"connection": "MyStorageConnectionAppSetting",
"direction": "in"
},
{
"name": "myOutputBlob",
"type": "blob",
"path": "samples-workitems/{queueTrigger}-Copy",
"connection": "MyStorageConnectionAppSetting",
"direction": "out"
}
],
"disabled": false
}
The following example demonstrates how to create a copy of an incoming blob as the output from a PowerShell
function.
In the function's configuration file (function.json), the trigger metadata property is used to specify the output
blob name in the path properties.
NOTE
To avoid infinite loops, make sure your input and output paths are different.
{
"bindings": [
{
"name": "myInputBlob",
"path": "data/{trigger}",
"connection": "MyStorageConnectionAppSetting",
"direction": "in",
"type": "blobTrigger"
},
{
"name": "myOutputBlob",
"type": "blob",
"path": "data/copy/{trigger}",
"connection": "MyStorageConnectionAppSetting",
"direction": "out"
}
],
"disabled": false
}
The following example shows blob input and output bindings in a function.json file and Python code that uses
the bindings. The function makes a copy of a blob. The function is triggered by a queue message that contains
the name of the blob to copy. The new blob is named {originalblobname}-Copy.
In the function.json file, the queueTrigger metadata property is used to specify the blob name in the path
properties:
{
"bindings": [
{
"queueName": "myqueue-items",
"connection": "MyStorageConnectionAppSetting",
"name": "queuemsg",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "inputblob",
"type": "blob",
"dataType": "binary",
"path": "samples-workitems/{queueTrigger}",
"connection": "MyStorageConnectionAppSetting",
"direction": "in"
},
{
"name": "outputblob",
"type": "blob",
"dataType": "binary",
"path": "samples-workitems/{queueTrigger}-Copy",
"connection": "MyStorageConnectionAppSetting",
"direction": "out"
}
],
"disabled": false,
"scriptFile": "__init__.py"
}
import logging
import azure.functions as func

def main(queuemsg: func.QueueMessage, inputblob: bytes, outputblob: func.Out[bytes]):
    logging.info(f'Python Queue trigger function processed {len(inputblob)} bytes')
    outputblob.set(inputblob)
Attributes
Both in-process and isolated process C# libraries use attributes to define the function. C# script instead uses a
function.json configuration file.
In-process
Isolated process
C# script
The following example sets the path to the blob and a FileAccess parameter indicating write for an output
binding:
[FunctionName("ResizeImage")]
public static void Run(
[BlobTrigger("sample-images/{name}")] Stream image,
[Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall)
{
...
}
While the attribute takes a Connection property, you can also use the StorageAccountAttribute to specify a
storage account connection. You can do this when you need to use a different storage account than other
functions in the library. The constructor takes the name of an app setting that contains a storage connection
string. The attribute can be applied at the parameter, method, or class level. The following example shows class
level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("StorageTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
...
}
Annotations
The @BlobOutput annotation lets the function write to the bound blob. If you use a byte array with
the annotation, set dataType to binary. Refer to the output example for details.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
Usage
The usage of the Blob output binding depends on the extension package version and the C# modality used in
your function app, which can be one of the following:
In-process class library
Isolated process
C# script
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
Choose a version to see usage details for the mode and version.
The following parameter types are extension version-specific and require FileAccess.ReadWrite in your C# class
library:
BlobClient
BlockBlobClient
PageBlobClient
AppendBlobClient
BlobBaseClient
For examples using these types, see the GitHub repository for the extension. To learn more about how these new
types differ and how to migrate to them, see the Azure.Storage.Blobs Migration Guide.
You can also use the StorageAccountAttribute to specify the storage account to use. You can do this when you
need to use a different storage account than other functions in the library. The constructor takes the name of an
app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or
class level. The following example shows class level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("BlobTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
....
}
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Azure Blobs. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string, follow the steps shown at Manage storage account access keys. The connection
string must be for a general-purpose storage account, not a Blob storage account.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For
example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyStorage." If you leave connection empty, the Functions runtime uses the default Storage
connection string in the app setting that is named AzureWebJobsStorage .
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
If you are setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all
other connections, the extension requires the following properties:
<CONNECTION_NAME_PREFIX>__serviceUri: The data plane URI of the blob service to which you are connecting, using the HTTPS scheme. Example: https://<storage_account_name>.blob.core.windows.net
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your blob container at runtime. Management
roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using
the Blob Storage extension in normal operation. Your application may require additional permissions based on
the code you write.
1 The blob trigger handles failure across multiple retries by writing poison blobs to a queue on the storage
account specified by the connection.
2 The AzureWebJobsStorage connection is used internally for blobs and queues that enable the trigger. If it is
configured to use an identity-based connection, it will need additional permissions beyond the default
requirement. These are covered by the Storage Blob Data Owner, Storage Queue Data Contributor, and Storage
Account Contributor roles. To learn more, see Connecting to host storage with an identity.
Next steps
Run a function when blob storage data changes
Read blob storage data when a function runs
Azure Cosmos DB bindings for Azure Functions 1.x
This article explains how to work with Azure Cosmos DB bindings in Azure Functions. Azure Functions supports
trigger, input, and output bindings for Azure Cosmos DB.
NOTE
This article is for Azure Functions 1.x. For information about how to use these bindings in Functions 2.x and higher, see
Azure Cosmos DB bindings for Azure Functions 2.x.
This binding was originally named DocumentDB. In Functions version 1.x, only the trigger was renamed Cosmos DB; the
input binding, output binding, and NuGet package retain the DocumentDB name.
NOTE
Azure Cosmos DB bindings are only supported for use with the SQL API. For all other Azure Cosmos DB APIs, you should
access the database from your function by using the static client for your API, including Azure Cosmos DB's API for
MongoDB, Cassandra API, Gremlin API, and Table API.
Trigger
The Azure Cosmos DB Trigger uses the Azure Cosmos DB Change Feed to listen for inserts and updates across
partitions. The change feed publishes inserts and updates, not deletions.
Trigger - example
C#
C# Script
JavaScript
The following example shows a C# function that is invoked when there are inserts or updates in the specified
database and collection.
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using System.Collections.Generic;
namespace CosmosDBSamplesV1
{
public static class CosmosTrigger
{
[FunctionName("CosmosTrigger")]
public static void Run([CosmosDBTrigger(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
LeaseCollectionName = "leases",
CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
TraceWriter log)
{
if (documents != null && documents.Count > 0)
{
log.Info($"Documents modified: {documents.Count}");
log.Info($"First document Id: {documents[0].Id}");
}
}
}
}
Trigger - attributes
C#
C# Script
JavaScript
[FunctionName("DocumentUpdates")]
public static void Run(
[CosmosDBTrigger("database", "collection", ConnectionStringSetting = "myCosmosDB")]
IReadOnlyList<Document> documents,
TraceWriter log)
{
...
}
Trigger - configuration
The following table explains the binding configuration properties that you set in the function.json file and the
CosmosDBTrigger attribute.
startFromBeginning / StartFromBeginning: (Optional) When set, it tells the Trigger to start reading changes from
the beginning of the history of the collection instead of the current time. This only works the first time the
Trigger starts, as in subsequent runs, the checkpoints are already stored. Setting this to true when there are
leases already created has no effect.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Trigger - usage
The trigger requires a second collection that it uses to store leases over the partitions. Both the collection being
monitored and the collection that contains the leases must be available for the trigger to work.
IMPORTANT
If multiple functions are configured to use a Cosmos DB trigger for the same collection, each of the functions should use a
dedicated lease collection or specify a different LeaseCollectionPrefix for each function. Otherwise, only one of the
functions will be triggered. For information about the prefix, see the Configuration section.
The trigger doesn't indicate whether a document was updated or inserted, it just provides the document itself. If
you need to handle updates and inserts differently, you could do that by implementing timestamp fields for
insertion or update.
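For example, a sketch of that approach (the property names are assumptions; the writer must maintain both fields):
public class ToDoItem
{
    public string Id { get; set; }
    public string Description { get; set; }
    // The writer sets CreatedAt once at creation and refreshes UpdatedAt on
    // every write, so the trigger can treat UpdatedAt > CreatedAt as an update.
    public DateTime CreatedAt { get; set; }
    public DateTime UpdatedAt { get; set; }
}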
Input
The Azure Cosmos DB input binding uses the SQL API to retrieve one or more Azure Cosmos DB documents and
passes them to the input parameter of the function. The document ID or query parameters can be determined
based on the trigger that invokes the function.
Input - example
C#
C# Script
JavaScript
namespace CosmosDBSamplesV1
{
public class ToDoItemLookup
{
public string ToDoItemId { get; set; }
}
}
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
namespace CosmosDBSamplesV1
{
public static class DocByIdFromJSON
{
[FunctionName("DocByIdFromJSON")]
public static void Run(
[QueueTrigger("todoqueueforlookup")] ToDoItemLookup toDoItemLookup,
[DocumentDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
Id = "{ToDoItemId}")]ToDoItem toDoItem,
TraceWriter log)
{
log.Info($"C# Queue trigger function processed Id={toDoItemLookup?.ToDoItemId}");
if (toDoItem == null)
{
log.Info($"ToDo item not found");
}
else
{
log.Info($"Found ToDo item, Description={toDoItem.Description}");
}
}
}
}
namespace CosmosDBSamplesV1
{
public static class DocByIdFromQueryString
{
[FunctionName("DocByIdFromQueryString")]
public static HttpResponseMessage Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
[DocumentDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
Id = "{Query.id}")] ToDoItem toDoItem,
TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request.");
if (toDoItem == null)
{
log.Info($"ToDo item not found");
}
else
{
log.Info($"Found ToDo item, Description={toDoItem.Description}");
}
return req.CreateResponse(HttpStatusCode.OK);
}
}
}
namespace CosmosDBSamplesV1
{
public static class DocByIdFromRouteData
{
[FunctionName("DocByIdFromRouteData")]
public static HttpResponseMessage Run(
[HttpTrigger(
AuthorizationLevel.Anonymous, "get", "post",
Route = "todoitems/{id}")]HttpRequestMessage req,
[DocumentDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
Id = "{id}")] ToDoItem toDoItem,
TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request.");
if (toDoItem == null)
{
log.Info($"ToDo item not found");
}
else
{
log.Info($"Found ToDo item, Description={toDoItem.Description}");
}
return req.CreateResponse(HttpStatusCode.OK);
}
}
}
namespace CosmosDBSamplesV1
{
public static class DocByIdFromRouteDataUsingSqlQuery
{
[FunctionName("DocByIdFromRouteDataUsingSqlQuery")]
public static HttpResponseMessage Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post",
Route = "todoitems2/{id}")]HttpRequestMessage req,
[DocumentDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
SqlQuery = "select * from ToDoItems r where r.id = {id}")] IEnumerable<ToDoItem> toDoItems,
TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request.");
foreach (ToDoItem toDoItem in toDoItems)
{
log.Info(toDoItem.Description);
}
return req.CreateResponse(HttpStatusCode.OK);
}
}
}
namespace CosmosDBSamplesV1
{
public static class DocsBySqlQuery
{
[FunctionName("DocsBySqlQuery")]
public static HttpResponseMessage Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
HttpRequestMessage req,
[DocumentDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
SqlQuery = "SELECT top 2 * FROM c order by c._ts desc")]
IEnumerable<ToDoItem> toDoItems,
TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request.");
foreach (ToDoItem toDoItem in toDoItems)
{
log.Info(toDoItem.Description);
}
return req.CreateResponse(HttpStatusCode.OK);
}
}
}
namespace CosmosDBSamplesV1
{
public static class DocsByUsingDocumentClient
{
[FunctionName("DocsByUsingDocumentClient")]
public static async Task<HttpResponseMessage> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
[DocumentDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection")] DocumentClient client,
TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request.");
if (searchterm == null)
{
return req.CreateResponse(HttpStatusCode.NotFound);
}
while (query.HasMoreResults)
{
foreach (ToDoItem result in await query.ExecuteNextAsync())
{
log.Info(result.Description);
}
}
return req.CreateResponse(HttpStatusCode.OK);
}
}
}
Input - attributes
C#
C# Script
JavaScript
partitionKey / PartitionKey: Specifies the partition key value for the lookup. May include binding parameters.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Input - usage
C#
C# Script
JavaScript
When the function exits successfully, any changes made to the input document via named input parameters are
automatically persisted.
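For example, a sketch that relies on this behavior (the isComplete field and function name are assumptions):
[FunctionName("MarkComplete")]
public static void Run(
    [QueueTrigger("todoqueueforlookup")] ToDoItemLookup toDoItemLookup,
    [DocumentDB(
        databaseName: "ToDoItems",
        collectionName: "Items",
        ConnectionStringSetting = "CosmosDBConnection",
        Id = "{ToDoItemId}")] dynamic toDoItem,
    TraceWriter log)
{
    // This change is written back to the database when the function
    // exits successfully; no explicit save call is needed.
    toDoItem.isComplete = true;
}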
Output
The Azure Cosmos DB output binding lets you write a new document to an Azure Cosmos DB database using
the SQL API.
Output - example
C#
C# Script
JavaScript
namespace CosmosDBSamplesV1
{
public class ToDoItem
{
public string Id { get; set; }
public string Description { get; set; }
}
}
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using System;
namespace CosmosDBSamplesV1
{
public static class WriteOneDoc
{
[FunctionName("WriteOneDoc")]
public static void Run(
[QueueTrigger("todoqueueforwrite")] string queueMessage,
[DocumentDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection")]out dynamic document,
TraceWriter log)
{
document = new { Description = queueMessage, id = Guid.NewGuid() };
            log.Info($"C# Queue trigger function inserted one row");
        }
    }
}
namespace CosmosDBSamplesV1
{
public static class WriteDocsIAsyncCollector
{
[FunctionName("WriteDocsIAsyncCollector")]
public static async Task Run(
[QueueTrigger("todoqueueforwritemulti")] ToDoItem[] toDoItemsIn,
[DocumentDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection")]
IAsyncCollector<ToDoItem> toDoItemsOut,
TraceWriter log)
{
log.Info($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
Output - attributes
C#
C# Script
JavaScript
[FunctionName("QueueToDocDB")]
public static void Run(
[QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
[DocumentDB("ToDoList", "Items", Id = "id", ConnectionStringSetting = "myCosmosDB")] out dynamic
document)
{
...
}
Output - configuration
The following table explains the binding configuration properties that you set in the function.json file and the
DocumentDB attribute.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Output - usage
By default, when you write to the output parameter in your function, a document is created in your database.
This document has an automatically generated GUID as the document ID. You can specify the document ID of the
output document by specifying the id property in the JSON object passed to the output parameter.
NOTE
When you specify the ID of an existing document, it gets overwritten by the new output document.
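For example, a sketch (the function name and fixed ID are assumptions):
[FunctionName("UpsertDoc")]
public static void Run(
    [QueueTrigger("todoqueueforwrite")] string queueMessage,
    [DocumentDB(
        databaseName: "ToDoItems",
        collectionName: "Items",
        ConnectionStringSetting = "CosmosDBConnection")] out dynamic document)
{
    // Specifying "id" overwrites any existing document with that ID;
    // omit it to let the binding generate a GUID.
    document = new { id = "todo-123", Description = queueMessage };
}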
Azure Cosmos DB bindings for Azure Functions 2.x and higher
This set of articles explains how to work with Azure Cosmos DB bindings in Azure Functions 2.x and higher.
Azure Functions supports trigger, input, and output bindings for Azure Cosmos DB.
NOTE
This reference is for Azure Functions version 2.x and higher. For information about how to use these bindings in Functions
1.x, see Azure Cosmos DB bindings for Azure Functions 1.x.
This binding was originally named DocumentDB. In Functions version 2.x and higher, the trigger, bindings, and package
are all named Cosmos DB.
Supported APIs
Azure Cosmos DB bindings are only supported for use with the SQL API. Support for Table API is provided by
using the Table storage bindings, starting with extension 5.x. For all other Azure Cosmos DB APIs, you should
access the database from your function by using the static client for your API, including Azure Cosmos DB's API
for MongoDB, Cassandra API, and Gremlin API.
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The process for installing the extension varies depending on the extension version:
Functions 2.x+
Extension 4.x+ (preview)
Working with the trigger and bindings requires that you reference the appropriate NuGet package. Install the
NuGet package, version 3.x.
Install bundle
The Cosmos DB binding extension is part of an extension bundle, which is specified in your host.json project file. You may need to
modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn more, see
extension bundle.
Bundle v2.x and v3.x
Bundle v4.x (Preview)
You can install this version of the extension in your function app by registering the extension bundle, version 2.x
or 3.x.
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
Because of schema changes in the Azure Cosmos DB SDK, version 4.x of the Azure Cosmos DB extension isn't
currently supported for Java functions.
You can add this version of the extension from the preview extension bundle v4 by adding or replacing the
following code in your host.json file:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
"version": "[4.0.0, 5.0.0)"
}
}
host.json settings
This section describes the configuration settings available for this binding in versions 2.x and higher. Settings in
the host.json file apply to all functions in a function app instance. The example host.json file below contains only
the version 2.x+ settings for this binding. For more information about function app configuration settings in
versions 2.x and later versions, see host.json reference for Azure Functions.
Functions 2.x+
Extension 4.x+ (preview)
{
"version": "2.0",
"extensions": {
"cosmosDB": {
"connectionMode": "Gateway",
"protocol": "Https",
"leaseOptions": {
"leasePrefix": "prefix1"
}
}
}
}
Next steps
Run a function when an Azure Cosmos DB document is created or modified (Trigger)
Read an Azure Cosmos DB document (Input binding)
Save changes to an Azure Cosmos DB document (Output binding)
Azure Cosmos DB trigger for Azure Functions 2.x
and higher
The Azure Cosmos DB Trigger uses the Azure Cosmos DB Change Feed to listen for inserts and updates across
partitions. The change feed publishes inserts and updates, not deletions.
For information on setup and configuration details, see the overview.
Example
The usage of the trigger depends on the extension package version and the C# modality used in your function
app, which can be one of the following:
In-process
Isolated process
C# script
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
The following examples depend on the extension version for the given C# mode.
Functions 2.x+
Extension 4.x+ (preview)
The following example shows a C# function that is invoked when there are inserts or updates in the specified
database and collection.
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;
namespace CosmosDBSamplesV2
{
public static class CosmosTrigger
{
[FunctionName("CosmosTrigger")]
public static void Run([CosmosDBTrigger(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
LeaseCollectionName = "leases",
CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
ILogger log)
{
if (documents != null && documents.Count > 0)
{
log.LogInformation($"Documents modified: {documents.Count}");
log.LogInformation($"First document Id: {documents[0].Id}");
}
}
}
}
This function is invoked when there are inserts or updates in the specified database and collection.
Functions 2.x+
Extension 4.x+ (preview)
@FunctionName("cosmosDBMonitor")
public void cosmosDbProcessor(
@CosmosDBTrigger(name = "items",
databaseName = "ToDoList",
collectionName = "Items",
leaseCollectionName = "leases",
createLeaseCollectionIfNotExists = true,
connectionStringSetting = "AzureCosmosDBConnection") String[] items,
final ExecutionContext context ) {
context.getLogger().info(items.length + " item(s) is/are changed.");
}
In the Java functions runtime library, use the @CosmosDBTrigger annotation on parameters whose value would
come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values using
Optional<T> .
The following example shows a Cosmos DB trigger binding in a function.json file and a JavaScript function that
uses the binding. The function writes log messages when Cosmos DB records are added or modified.
Here's the binding data in the function.json file:
Functions 2.x+
Functions 4.x+ (preview)
{
"type": "cosmosDBTrigger",
"name": "documents",
"direction": "in",
"leaseCollectionName": "leases",
"connectionStringSetting": "<connection-app-setting>",
"databaseName": "Tasks",
"collectionName": "Items",
"createLeaseCollectionIfNotExists": true
}
Note that some of the binding attribute names changed in version 4.x of the Azure Cosmos DB extension.
Here's the JavaScript code:
module.exports = async function (context, documents) {
    if (!!documents && documents.length > 0) {
        context.log('Document Id: ', documents[0].id);
    }
}
The following example shows how to run a function as data changes in Cosmos DB.
Functions 2.x+
Functions 4.x+ (preview)
{
"type": "cosmosDBTrigger",
"name": "documents",
"direction": "in",
"leaseCollectionName": "leases",
"connectionStringSetting": "<connection-app-setting>",
"databaseName": "Tasks",
"collectionName": "Items",
"createLeaseCollectionIfNotExists": true
}
Note that some of the binding attribute names changed in version 4.x of the Azure Cosmos DB extension.
In the run.ps1 file, you have access to the document that triggers the function via the $Documents parameter.
param($Documents, $TriggerMetadata)
The following example shows a Cosmos DB trigger binding in a function.json file and a Python function that
uses the binding. The function writes log messages when Cosmos DB records are modified.
Here's the binding data in the function.json file:
Functions 2.x+
Functions 4.x+ (preview)
{
"type": "cosmosDBTrigger",
"name": "documents",
"direction": "in",
"leaseCollectionName": "leases",
"connectionStringSetting": "<connection-app-setting>",
"databaseName": "Tasks",
"collectionName": "Items",
"createLeaseCollectionIfNotExists": true
}
Note that some of the binding attribute names changed in version 4.x of the Azure Cosmos DB extension.
Here's the Python code:
import logging
import azure.functions as func

def main(documents: func.DocumentList) -> str:
    if documents:
        logging.info('Document id: %s', documents[0]['id'])
Attributes
Both in-process and isolated process C# libraries use the CosmosDBTriggerAttribute to define the function. C#
script instead uses a function.json configuration file.
Functions 2.x+
Extension 4.x+ (preview)
FeedPollDelay: (Optional) The time (in milliseconds) for the delay between polling a partition for new changes
on the feed, after all current changes are drained. Default is 5,000 milliseconds, or 5 seconds.
StartFromBeginning: (Optional) This option tells the Trigger to read changes from the beginning of the
collection's change history instead of starting at the current time. Reading from the beginning only works the
first time the trigger starts, as in subsequent runs, the checkpoints are already stored. Setting this option to
true when there are leases already created has no effect.
Annotations
Functions 2.x+
Extension 4.x+ (preview)
From the Java functions runtime library, use the @CosmosDBTrigger annotation on parameters whose value
comes from Cosmos DB. The annotation supports the following properties:
name
connectionStringSetting
databaseName
collectionName
leaseConnectionStringSetting
leaseDatabaseName
leaseCollectionName
createLeaseCollectionIfNotExists
leasesCollectionThroughput
leaseCollectionPrefix
feedPollDelay
leaseAcquireInterval
leaseExpirationInterval
leaseRenewInterval
checkpointInterval
checkpointDocumentCount
maxItemsPerInvocation
startFromBeginning
preferredLocations
Configuration
The following table explains the binding configuration properties that you set in the function.json file, where
properties differ by extension version:
Functions 2.x+
Extension 4.x+ (preview)
name: The variable name used in function code that represents the list of documents with changes.
feedPollDelay: (Optional) The time (in milliseconds) for the delay between polling a partition for new changes
on the feed, after all current changes are drained. Default is 5,000 milliseconds, or 5 seconds.
startFromBeginning: (Optional) This option tells the Trigger to read changes from the beginning of the
collection's change history instead of starting at the current time. Reading from the beginning only works the
first time the trigger starts, as in subsequent runs, the checkpoints are already stored. Setting this option to
true when there are leases already created has no effect.
Usage
The parameter type supported by the Azure Cosmos DB trigger depends on the Functions runtime version, the
extension package version, and the C# modality used.
The trigger requires a second collection that it uses to store leases over the partitions. Both the collection being
monitored and the collection that contains the leases must be available for the trigger to work.
IMPORTANT
If multiple functions are configured to use a Cosmos DB trigger for the same collection, each of the functions should use a
dedicated lease collection or specify a different LeaseCollectionPrefix (attribute) / leaseCollectionPrefix (function.json) for
each function. Otherwise, only one of the functions will be triggered. For information about the prefix, see the
Configuration section.
The trigger doesn't indicate whether a document was updated or inserted, it just provides the document itself. If
you need to handle updates and inserts differently, you could do that by implementing timestamp fields for
insertion or update.
Connections
The connectionStringSetting / connection and leaseConnectionStringSetting / leaseConnection properties are
references to environment configuration which specifies how the app should connect to Azure Cosmos DB. They
may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
This option is only available for the connection and leaseConnection versions from version 4.x or higher of
the extension.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
The connection string for your database account should be stored in an application setting with a name
matching the value specified by the connection property of the binding configuration.
Identity-based connections
If you are using version 4.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
In this mode, the extension requires the following properties:
<CONNECTION_NAME_PREFIX>__accountEndpoint: The Azure Cosmos DB account endpoint URI. Example: https://<database_account_name>.documents.azure.com:443/
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
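Continuing the sketch above, a user-assigned identity could be configured with settings such as the following; the credential and clientId property names follow the convention described in this section, and the values are illustrative:

{
  "IsEncrypted": false,
  "Values": {
    "MyCosmosConnection__accountEndpoint": "https://<account>.documents.azure.com:443/",
    "MyCosmosConnection__credential": "managedidentity",
    "MyCosmosConnection__clientId": "<client-id-of-user-assigned-identity>"
  }
}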
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege , granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your database account at runtime.
Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended
when using the Cosmos DB extension in normal operation. Your application may require additional permissions
based on the code you write.
Binding type | Example built-in roles
Next steps
Read an Azure Cosmos DB document (Input binding)
Save changes to an Azure Cosmos DB document (Output binding)
Azure Cosmos DB input binding for Azure Functions 2.x and higher
The Azure Cosmos DB input binding uses the SQL API to retrieve one or more Azure Cosmos DB documents and
passes them to the input parameter of the function. The document ID or query parameters can be determined
based on the trigger that invokes the function.
For information on setup and configuration details, see the overview.
NOTE
When the collection is partitioned, lookup operations must also specify the partition key value.
Example
Unless otherwise noted, examples in this article target version 3.x of the Azure Cosmos DB extension. For use
with extension version 4.x, you need to replace the string collection in property and attribute names with
container .
In-process
Isolated process
C# Script
This section contains the following examples for using in-process C# class library functions with extension
version 3.x:
Queue trigger, look up ID from JSON
HTTP trigger, look up ID from query string
HTTP trigger, look up ID from route data
HTTP trigger, look up ID from route data, using SqlQuery
HTTP trigger, get multiple docs, using SqlQuery
HTTP trigger, get multiple docs, using DocumentClient
HTTP trigger, get multiple docs, using CosmosClient (v4 extension)
The examples refer to a simple ToDoItem type:
namespace CosmosDBSamplesV2
{
    public class ToDoItem
    {
        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("partitionKey")]
        public string PartitionKey { get; set; }

        public string Description { get; set; }
    }
}
namespace CosmosDBSamplesV2
{
    public class ToDoItemLookup
    {
        public string ToDoItemId { get; set; }

        public string ToDoItemPartitionKeyValue { get; set; }
    }
}
namespace CosmosDBSamplesV2
{
public static class DocByIdFromJSON
{
[FunctionName("DocByIdFromJSON")]
public static void Run(
[QueueTrigger("todoqueueforlookup")] ToDoItemLookup toDoItemLookup,
[CosmosDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
Id = "{ToDoItemId}",
PartitionKey = "{ToDoItemPartitionKeyValue}")]ToDoItem toDoItem,
ILogger log)
{
log.LogInformation($"C# Queue trigger function processed Id={toDoItemLookup?.ToDoItemId} Key=
{toDoItemLookup?.ToDoItemPartitionKeyValue}");
if (toDoItem == null)
{
log.LogInformation($"ToDo item not found");
}
else
{
log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
}
}
}
}
NOTE
The HTTP query string parameter is case-sensitive.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
namespace CosmosDBSamplesV2
{
public static class DocByIdFromQueryString
{
[FunctionName("DocByIdFromQueryString")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
HttpRequest req,
[CosmosDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
Id = "{Query.id}",
PartitionKey = "{Query.partitionKey}")] ToDoItem toDoItem,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
if (toDoItem == null)
{
log.LogInformation($"ToDo item not found");
}
else
{
log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
}
return new OkResult();
}
}
}
namespace CosmosDBSamplesV2
{
public static class DocByIdFromRouteData
{
[FunctionName("DocByIdFromRouteData")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post",
Route = "todoitems/{partitionKey}/{id}")]HttpRequest req,
[CosmosDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
Id = "{id}",
PartitionKey = "{partitionKey}")] ToDoItem toDoItem,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
if (toDoItem == null)
{
log.LogInformation($"ToDo item not found");
}
else
{
log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
}
return new OkResult();
}
}
}
NOTE
If you need to query by just the ID, it is recommended to use a lookup, like the previous examples, as it consumes fewer request units. Point read operations (GET) are more efficient than queries by ID.
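As an illustration of such a point read with the v4 extension's CosmosClient, here's a minimal sketch that reuses the ToDoItem type and the database and container names from the examples above:

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class PointReadExample
{
    // Reads one item by id and partition key; a point read (GET) costs fewer
    // request units than an equivalent SQL query that filters on the id.
    public static async Task<ToDoItem> ReadToDoItemAsync(
        CosmosClient client, string id, string partitionKey)
    {
        Container container = client.GetContainer("ToDoItems", "Items");
        ItemResponse<ToDoItem> response =
            await container.ReadItemAsync<ToDoItem>(id, new PartitionKey(partitionKey));
        return response.Resource;
    }
}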
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;
namespace CosmosDBSamplesV2
{
public static class DocByIdFromRouteDataUsingSqlQuery
{
[FunctionName("DocByIdFromRouteDataUsingSqlQuery")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post",
Route = "todoitems2/{id}")]HttpRequest req,
[CosmosDB("ToDoItems", "Items",
ConnectionStringSetting = "CosmosDBConnection",
SqlQuery = "select * from ToDoItems r where r.id = {id}")]
IEnumerable<ToDoItem> toDoItems,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
namespace CosmosDBSamplesV2
{
public static class DocsBySqlQuery
{
[FunctionName("DocsBySqlQuery")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
HttpRequest req,
[CosmosDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection",
SqlQuery = "SELECT top 2 * FROM c order by c._ts desc")]
IEnumerable<ToDoItem> toDoItems,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
foreach (ToDoItem toDoItem in toDoItems)
{
log.LogInformation(toDoItem.Description);
}
return new OkResult();
}
}
}
NOTE
You can also use the IDocumentClient interface to make testing easier.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
using System;
using System.Linq;
using System.Threading.Tasks;
namespace CosmosDBSamplesV2
{
public static class DocsByUsingDocumentClient
{
[FunctionName("DocsByUsingDocumentClient")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post",
Route = null)]HttpRequest req,
[CosmosDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection")] DocumentClient client,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
while (query.HasMoreResults)
{
foreach (ToDoItem result in await query.ExecuteNextAsync())
{
log.LogInformation(result.Description);
}
}
return new OkResult();
}
}
}
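As the note above mentions, you can bind to IDocumentClient instead of DocumentClient so the client can be mocked in unit tests. Here's a minimal sketch (the binding values mirror the examples above):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class DocsByUsingIDocumentClient
{
    [FunctionName("DocsByUsingIDocumentClient")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
        [CosmosDB(
            databaseName: "ToDoItems",
            collectionName: "Items",
            ConnectionStringSetting = "CosmosDBConnection")] IDocumentClient client,
        ILogger log)
    {
        // At runtime the binding supplies the concrete DocumentClient;
        // in tests, pass any IDocumentClient stub to this method directly.
        log.LogInformation($"Bound Cosmos DB endpoint: {client.ServiceEndpoint}");
        return new OkResult();
    }
}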
namespace CosmosDBSamplesV2
{
public static class DocsByUsingCosmosClient
{
[FunctionName("DocsByUsingCosmosClient")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post",
Route = null)]HttpRequest req,
[CosmosDB(
databaseName: "ToDoItems",
containerName: "Items",
Connection = "CosmosDBConnection")] CosmosClient client,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
@Override
public String toString() {
return "ToDoItem={id=" + id + ",description=" + description + "}";
}
}
@FunctionName("DocByIdFromQueryString")
public HttpResponseMessage run(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@CosmosDBInput(name = "database",
databaseName = "ToDoList",
collectionName = "Items",
id = "{Query.id}",
partitionKey = "{Query.partitionKeyValue}",
connectionStringSetting = "Cosmos_DB_Connection_String")
Optional<String> item,
final ExecutionContext context) {
// Item list
context.getLogger().info("Parameters are: " + request.getQueryParameters());
context.getLogger().info("String from the database is " + (item.isPresent() ? item.get() : null));
In the Java functions runtime library, use the @CosmosDBInput annotation on function parameters whose value
would come from Cosmos DB. This annotation can be used with native Java types, POJOs, or nullable values
using Optional<T> .
@FunctionName("DocByIdFromQueryStringPojo")
public HttpResponseMessage run(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
@CosmosDBInput(name = "database",
databaseName = "ToDoList",
collectionName = "Items",
id = "{Query.id}",
partitionKey = "{Query.partitionKeyValue}",
connectionStringSetting = "Cosmos_DB_Connection_String")
ToDoItem item,
final ExecutionContext context) {
// Item list
context.getLogger().info("Parameters are: " + request.getQueryParameters());
context.getLogger().info("Item from the database is " + item);
@FunctionName("DocByIdFromRoute")
public HttpResponseMessage run(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS,
route = "todoitems/{partitionKeyValue}/{id}")
HttpRequestMessage<Optional<String>> request,
@CosmosDBInput(name = "database",
databaseName = "ToDoList",
collectionName = "Items",
id = "{id}",
partitionKey = "{partitionKeyValue}",
connectionStringSetting = "Cosmos_DB_Connection_String")
Optional<String> item,
final ExecutionContext context) {
// Item list
context.getLogger().info("Parameters are: " + request.getQueryParameters());
context.getLogger().info("String from the database is " + (item.isPresent() ? item.get() : null));
NOTE
If you need to query by just the ID, it is recommended to use a lookup, like the previous examples, as it consumes fewer request units. Point read operations (GET) are more efficient than queries by ID.
public class DocByIdFromRouteSqlQuery {
@FunctionName("DocByIdFromRouteSqlQuery")
public HttpResponseMessage run(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS,
route = "todoitems2/{id}")
HttpRequestMessage<Optional<String>> request,
@CosmosDBInput(name = "database",
databaseName = "ToDoList",
collectionName = "Items",
sqlQuery = "select * from Items r where r.id = {id}",
connectionStringSetting = "Cosmos_DB_Connection_String")
ToDoItem[] item,
final ExecutionContext context) {
// Item list
context.getLogger().info("Parameters are: " + request.getQueryParameters());
context.getLogger().info("Items from the database are " + item);
HTTP trigger, get multiple docs from route data, using SqlQuery
The following example shows a Java function that retrieves multiple documents. The function is triggered by an
HTTP request that uses a route parameter desc to specify the string to search for in the description field. The
search term is used to retrieve a collection of documents from the specified database and collection, converting
the result set to a ToDoItem[] and passing it as an argument to the function.
public class DocsFromRouteSqlQuery {
@FunctionName("DocsFromRouteSqlQuery")
public HttpResponseMessage run(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET},
authLevel = AuthorizationLevel.ANONYMOUS,
route = "todoitems3/{desc}")
HttpRequestMessage<Optional<String>> request,
@CosmosDBInput(name = "database",
databaseName = "ToDoList",
collectionName = "Items",
sqlQuery = "select * from Items r where contains(r.description, {desc})",
connectionStringSetting = "Cosmos_DB_Connection_String")
ToDoItem[] items,
final ExecutionContext context) {
// Item list
context.getLogger().info("Parameters are: " + request.getQueryParameters());
context.getLogger().info("Number of items from the database is " + (items == null ? 0 :
items.length));
This section contains the following examples that read a single document by specifying an ID value from various
sources:
Queue trigger, look up ID from JSON
HTTP trigger, look up ID from query string
HTTP trigger, look up ID from route data
Queue trigger, get multiple docs, using SqlQuery
// Change input document contents using Azure Cosmos DB input binding, using
context.bindings.inputDocumentOut
module.exports = async function (context) {
context.bindings.inputDocumentOut = context.bindings.inputDocumentIn;
context.bindings.inputDocumentOut.text = "This was updated!";
};
{
"name": "InputDocumentIn",
"type": "cosmosDB",
"databaseName": "MyDatabase",
"collectionName": "MyCollection",
"id": "{queueTrigger_payload_property}",
"partitionKey": "{queueTrigger_payload_property}",
"connectionStringSetting": "CosmosDBConnection",
"direction": "in"
},
{
"name": "InputDocumentOut",
"type": "cosmosDB",
"databaseName": "MyDatabase",
"collectionName": "MyCollection",
"createIfNotExists": false,
"partitionKey": "{queueTrigger_payload_property}",
"connectionStringSetting": "CosmosDBConnection",
"direction": "out"
}
The run.ps1 file has the PowerShell code which reads the incoming document and outputs changes.
param($QueueItem, $InputDocumentIn, $TriggerMetadata)

$Document = $InputDocumentIn
$Document.text = 'This was updated!'

Push-OutputBinding -Name InputDocumentOut -Value $Document
{
"bindings": [
{
"type": "cosmosDB",
"name": "ToDoItem",
"databaseName": "ToDoItems",
"collectionName": "Items",
"connectionStringSetting": "CosmosDBConnection",
"direction": "in",
"Id": "{Query.id}",
"PartitionKey": "{Query.partitionKeyValue}"
},
{
"authLevel": "anonymous",
"name": "Request",
"type": "httpTrigger",
"direction": "in",
"methods": [
"get",
"post"
]
},
{
"name": "Response",
"type": "http",
"direction": "out"
}
],
"disabled": false
}
The run.ps1 file has the PowerShell code that reads the incoming document and outputs changes.
using namespace System.Net
if (-not $ToDoItem) {
    Write-Host 'ToDo item not found'
} else {
    Write-Host "Found ToDo item, Description=$($ToDoItem.Description)"
}
The run.ps1 file has the PowerShell code that reads the incoming document and outputs changes.
if (-not $ToDoItem) {
    Write-Host 'ToDo item not found'
} else {
    Write-Host "Found ToDo item, Description=$($ToDoItem.Description)"
}
The run.ps1 file has the PowerShell code that reads the incoming documents.
This section contains the following examples that read a single document by specifying an ID value from various
sources:
Queue trigger, look up ID from JSON
HTTP trigger, look up ID from query string
HTTP trigger, look up ID from route data
Queue trigger, get multiple docs, using SqlQuery
{
"name": "documents",
"type": "cosmosDB",
"databaseName": "MyDatabase",
"collectionName": "MyCollection",
"id" : "{queueTrigger_payload_property}",
"partitionKey": "{queueTrigger_payload_property}",
"connectionStringSetting": "MyAccount_COSMOSDB",
"direction": "in"
},
{
"name": "$return",
"type": "cosmosDB",
"databaseName": "MyDatabase",
"collectionName": "MyCollection",
"createIfNotExists": false,
"partitionKey": "{queueTrigger_payload_property}",
"connectionStringSetting": "MyAccount_COSMOSDB",
"direction": "out"
}
{
"bindings": [
{
"authLevel": "anonymous",
"name": "req",
"type": "httpTrigger",
"direction": "in",
"methods": [
"get",
"post"
]
},
{
"name": "$return",
"type": "http",
"direction": "out"
},
{
"type": "cosmosDB",
"name": "todoitems",
"databaseName": "ToDoItems",
"collectionName": "Items",
"connectionStringSetting": "CosmosDBConnection",
"direction": "in",
"Id": "{Query.id}",
"PartitionKey": "{Query.partitionKeyValue}"
}
],
"scriptFile": "__init__.py"
}
import logging
import azure.functions as func


def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
    if not todoitems:
        logging.warning("ToDo item not found")
    else:
        logging.info("Found ToDo item, Description=%s", todoitems[0]['description'])
    return 'OK'
HTTP trigger, look up ID from route data
The following example shows a Python function that retrieves a single document. The function is triggered by an
HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key
value are used to retrieve a ToDoItem document from the specified database and collection.
Here's the function.json file:
{
"bindings": [
{
"authLevel": "anonymous",
"name": "req",
"type": "httpTrigger",
"direction": "in",
"methods": [
"get",
"post"
],
"route":"todoitems/{partitionKeyValue}/{id}"
},
{
"name": "$return",
"type": "http",
"direction": "out"
},
{
"type": "cosmosDB",
"name": "todoitems",
"databaseName": "ToDoItems",
"collectionName": "Items",
"connection": "CosmosDBConnection",
"direction": "in",
"Id": "{id}",
"PartitionKey": "{partitionKeyValue}"
}
],
"disabled": false,
"scriptFile": "__init__.py"
}
import logging
import azure.functions as func


def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
    if not todoitems:
        logging.warning("ToDo item not found")
    else:
        logging.info("Found ToDo item, Description=%s", todoitems[0]['description'])
    return 'OK'
{
"name": "documents",
"type": "cosmosDB",
"direction": "in",
"databaseName": "MyDb",
"collectionName": "MyCollection",
"sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
"connectionStringSetting": "CosmosDBConnection"
}
Attributes
Both in-process and isolated process C# libraries use attributes to define the function. C# script instead uses a
function.json configuration file.
Functions 2.x+
Extension 4.x+ (preview)
PartitionKey - Specifies the partition key value for the lookup. May include binding parameters. It is required for lookups in partitioned collections.
Annotations
From the Java functions runtime library, use the @CosmosDBInput annotation on parameters that read from
Azure Cosmos DB. The annotation supports the following properties:
name
connectionStringSetting
databaseName
collectionName
dataType
id
partitionKey
sqlQuery
Configuration
The following table explains the binding configuration properties that you set in the function.json file, where
properties differ by extension version:
Functions 2.x+
Extension 4.x+ (preview)
name - The name of the binding parameter that represents the document or documents in function code.
partitionKey - Specifies the partition key value for the lookup. May include binding parameters. It is required for lookups in partitioned collections.
Usage
The parameter type supported by the Azure Cosmos DB input binding depends on the Functions runtime version, the extension package version, and the C# modality used.
Functions 2.x+
Extension 4.x+ (preview)
When the function exits successfully, any changes made to the input document are automatically persisted.
From the Java functions runtime library, the @CosmosDBInput annotation exposes Cosmos DB data to the
function. This annotation can be used with native Java types, POJOs, or nullable values using Optional<T> .
Updates are not made automatically upon function exit. Instead, use context.bindings.<documentName>In and
context.bindings.<documentName>Out to make updates. See the JavaScript example for more detail.
Updates to documents are not made automatically upon function exit. To update documents in a function use an
output binding. See the PowerShell example for more detail.
Data is made available to the function via a DocumentList parameter. Changes made to the document are not
automatically persisted.
Connections
The connectionStringSetting / connection and leaseConnectionStringSetting / leaseConnection properties are
references to environment configuration which specifies how the app should connect to Azure Cosmos DB. They
may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
This option is only available for the connection and leaseConnection versions from version 4.x or higher of
the extension.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
The connection string for your database account should be stored in an application setting with a name
matching the value specified by the connection property of the binding configuration.
Identity-based connections
If you are using version 4.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
In this mode, the extension requires the following properties:
Property | Environment variable template | Description | Example value
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege , granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your database account at runtime.
Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended
when using the Cosmos DB extension in normal operation. Your application may require additional permissions
based on the code you write.
Next steps
Run a function when an Azure Cosmos DB document is created or modified (Trigger)
Save changes to an Azure Cosmos DB document (Output binding)
Azure Cosmos DB output binding for Azure Functions 2.x and higher
The Azure Cosmos DB output binding lets you write a new document to an Azure Cosmos DB database using
the SQL API.
For information on setup and configuration details, see the overview.
Example
Unless otherwise noted, examples in this article target version 3.x of the Azure Cosmos DB extension. For use
with extension version 4.x, you need to replace the string collection in property and attribute names with
container .
In-process
Isolated process
C# Script
namespace CosmosDBSamplesV2
{
public class ToDoItem
{
public string id { get; set; }
public string Description { get; set; }
}
}
namespace CosmosDBSamplesV2
{
public static class WriteOneDoc
{
[FunctionName("WriteOneDoc")]
public static void Run(
[QueueTrigger("todoqueueforwrite")] string queueMessage,
[CosmosDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection")]out dynamic document,
ILogger log)
{
            document = new { Description = queueMessage, id = Guid.NewGuid() };
        }
    }
}
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
using System;
namespace CosmosDBSamplesV2
{
public static class WriteOneDoc
{
[FunctionName("WriteOneDoc")]
public static void Run(
[QueueTrigger("todoqueueforwrite")] string queueMessage,
[CosmosDB(
databaseName: "ToDoItems",
containerName: "Items",
Connection = "CosmosDBConnection")]out dynamic document,
ILogger log)
{
            document = new { Description = queueMessage, id = Guid.NewGuid() };
        }
    }
}
namespace CosmosDBSamplesV2
{
public static class WriteDocsIAsyncCollector
{
[FunctionName("WriteDocsIAsyncCollector")]
public static async Task Run(
[QueueTrigger("todoqueueforwritemulti")] ToDoItem[] toDoItemsIn,
[CosmosDB(
databaseName: "ToDoItems",
collectionName: "Items",
ConnectionStringSetting = "CosmosDBConnection")]
IAsyncCollector<ToDoItem> toDoItemsOut,
ILogger log)
{
log.LogInformation($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
@FunctionName("getItem")
@CosmosDBOutput(name = "database",
databaseName = "ToDoList",
collectionName = "Items",
connectionStringSetting = "AzureCosmosDBConnection")
public String cosmosDbQueryById(
@QueueTrigger(name = "msg",
queueName = "myqueue-items",
connection = "AzureWebJobsStorage")
String message,
final ExecutionContext context) {
return "{ id: \"" + System.currentTimeMillis() + "\", Description: " + message + " }";
}
HTTP trigger, save one document to database via return value
The following example shows a Java function whose signature is annotated with @CosmosDBOutput and has a return value of type String. The JSON document returned by the function is automatically written to the corresponding Cosmos DB collection.
@FunctionName("WriteOneDoc")
@CosmosDBOutput(name = "database",
databaseName = "ToDoList",
collectionName = "Items",
connectionStringSetting = "Cosmos_DB_Connection_String")
public String run(
@HttpTrigger(name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
final ExecutionContext context) {
        // Item list
        context.getLogger().info("Parameters are: " + request.getQueryParameters());

        // Parse query parameter
        final String query = request.getQueryParameters().get("desc");
        final String name = request.getBody().orElse(query);
// Generate random ID
final int id = Math.abs(new Random().nextInt());
// Generate document
final String jsonDocument = "{\"id\":\"" + id + "\", " +
"\"description\": \"" + name + "\"}";
return jsonDocument;
}
// Item list
context.getLogger().info("Parameters are: " + request.getQueryParameters());
// Generate documents
List<ToDoItem> items = new ArrayList<>();
// Create ToDoItem
ToDoItem item = new ToDoItem(String.valueOf(id), name);
items.add(item);
}
In the Java functions runtime library, use the @CosmosDBOutput annotation on parameters that will be written to
Cosmos DB. The annotation parameter type should be OutputBinding<T> , where T is either a native Java type or
a POJO.
The following example shows an Azure Cosmos DB output binding in a function.json file and a JavaScript
function that uses the binding. The function uses a queue input binding for a queue that receives JSON in the
following format:
{
"name": "John Henry",
"employeeId": "123456",
"address": "A town nearby"
}
The function creates Azure Cosmos DB documents in the following format for each record:
{
"id": "John Henry-123456",
"name": "John Henry",
"employeeId": "123456",
"address": "A town nearby"
}
{
"name": "employeeDocument",
"type": "cosmosDB",
"databaseName": "MyDatabase",
"collectionName": "MyCollection",
"createIfNotExists": true,
"connectionStringSetting": "MyAccount_COSMOSDB",
"direction": "out"
}
module.exports = async function (context) {
    context.bindings.employeeDocument = JSON.stringify({
        id: context.bindings.myQueueItem.name + "-" + context.bindings.myQueueItem.employeeId,
        name: context.bindings.myQueueItem.name,
        employeeId: context.bindings.myQueueItem.employeeId,
        address: context.bindings.myQueueItem.address
    });
};
For bulk insert, form the objects first and then run the stringify function. Here's the JavaScript code:
module.exports = async function (context) {
    context.bindings.employeeDocument = JSON.stringify([
        {
            "id": "John Henry-123456",
            "name": "John Henry",
            "employeeId": "123456",
            "address": "A town nearby"
        },
        {
            "id": "John Doe-123457",
            "name": "John Doe",
            "employeeId": "123457",
            "address": "A town far away"
        }
    ]);
};
The following example shows how to write data to Cosmos DB using an output binding. The binding is declared in the function's configuration file (function.json). It takes data from a queue message and writes it to a Cosmos DB document.
{
"name":"EmployeeDocument",
"type":"cosmosDB",
"databaseName":"MyDatabase",
"collectionName":"MyCollection",
"createIfNotExists":true,
"connectionStringSetting":"MyStorageConnectionAppSetting",
"direction":"out"
}
In the run.ps1 file, the object returned from the function is mapped to an EmployeeDocument object, which is
persisted in the database.
param($QueueItem, $TriggerMetadata)

Push-OutputBinding -Name EmployeeDocument -Value @{
    id = $QueueItem.name + '-' + $QueueItem.employeeId
    name = $QueueItem.name
    employeeId = $QueueItem.employeeId
    address = $QueueItem.address
}
The following example demonstrates how to write a document to an Azure Cosmos DB database as the output
of a function.
The binding definition is defined in function.json where type is set to cosmosDB .
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "cosmosDB",
"direction": "out",
"name": "doc",
"databaseName": "demodb",
"collectionName": "data",
"createIfNotExists": "true",
"connectionStringSetting": "AzureCosmosDBConnectionString"
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
To write to the database, pass a document object to the set method of the database parameter.
import azure.functions as func


def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> str:
    request_body = req.get_body()
    doc.set(func.Document.from_json(request_body))
    return 'OK'
Attributes
Both in-process and isolated process C# libraries use attributes to define the function. C# script instead uses a
function.json configuration file.
Functions 2.x+
Extension 4.x+ (preview)
Annotations
From the Java functions runtime library, use the @CosmosDBOutput annotation on parameters that write to Azure
Cosmos DB. The annotation supports the following properties:
name
connectionStringSetting
databaseName
collectionName
createIfNotExists
dataType
partitionKey
preferredLocations
useMultipleWriteLocations
Configuration
The following table explains the binding configuration properties that you set in the function.json file, where
properties differ by extension version:
Functions 2.x+
Extension 4.x+ (preview)
Usage
By default, when you write to the output parameter in your function, a document is created in your database.
This document has an automatically generated GUID as the document ID. You can specify the document ID of the
output document by specifying the id property in the JSON object passed to the output parameter.
NOTE
When you specify the ID of an existing document, it gets overwritten by the new output document.
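For example, here's a minimal variation on the WriteOneDoc sample above that supplies its own id (a sketch; the queue and collection names mirror that sample):

using Microsoft.Azure.WebJobs;

public static class WriteOneDocWithId
{
    [FunctionName("WriteOneDocWithId")]
    public static void Run(
        [QueueTrigger("todoqueueforwrite")] string queueMessage,
        [CosmosDB(
            databaseName: "ToDoItems",
            collectionName: "Items",
            ConnectionStringSetting = "CosmosDBConnection")] out dynamic document)
    {
        // Setting "id" explicitly replaces the auto-generated GUID; if a document
        // with this id already exists, it is overwritten by this output.
        document = new { id = $"todo-{queueMessage}", Description = queueMessage };
    }
}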
Connections
The connectionStringSetting / connection and leaseConnectionStringSetting / leaseConnection properties are
references to environment configuration which specifies how the app should connect to Azure Cosmos DB. They
may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
This option is only available for the connection and leaseConnection versions from version 4.x or higher of
the extension.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
The connection string for your database account should be stored in an application setting with a name
matching the value specified by the connection property of the binding configuration.
Identity-based connections
If you are using version 4.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
In this mode, the extension requires the following properties:
Property | Environment variable template | Description | Example value
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege , granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your database account at runtime.
Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended
when using the Cosmos DB extension in normal operation. Your application may require additional permissions
based on the code you write.
Next steps
Run a function when an Azure Cosmos DB document is created or modified (Trigger)
Read an Azure Cosmos DB document (Input binding)
Azure SQL bindings for Azure Functions overview (preview)
This set of articles explains how to work with Azure SQL bindings in Azure Functions. Azure Functions supports
input and output bindings for the Azure SQL and SQL Server products.
Action | Type
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
Add the extension to your project by installing this NuGet package.
Install bundle
The SQL bindings extension is part of a preview extension bundle, which is specified in your host.json project
file.
Preview Bundle v3.x
Preview Bundle v4.x
You can add the preview extension bundle by adding or replacing the following code in your host.json file:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
"version": "[3.*, 4.0.0)"
}
}
Functions runtime
NOTE
Python language support for the SQL bindings extension is available starting with v4.5.0 of the functions runtime. You
may need to update your install of Azure Functions Core Tools for local development. Learn more about determining the
runtime in Azure regions from the functions runtime documentation. Please see the tracking GitHub issue for the latest
update on availability.
Install bundle
The SQL bindings extension is part of a preview extension bundle, which is specified in your host.json project
file.
Preview Bundle v4.x
Preview Bundle v3.x
Python support isn't available with the SQL bindings extension in the v3 version of the functions runtime.
Update packages
Support for the SQL bindings extension is available in the 1.11.3b1 version of the Azure Functions Python
library. Add this version of the library to your functions project with an update to the line for azure-functions==
in the requirements.txt file in your Python Azure Functions project as seen in the following snippet:
azure-functions==1.11.3b1
After setting the library version, update your application settings to isolate the dependencies by adding PYTHON_ISOLATE_WORKER_DEPENDENCIES with the value 1. Locally, this is set in the local.settings.json file as seen below:
"PYTHON_ISOLATE_WORKER_DEPENDENCIES": "1"
Support for Python durable functions with SQL bindings isn't yet available.
NOTE
In the current preview, Azure SQL bindings are only supported by C# class library functions, JavaScript functions, and
Python functions.
Next steps
Read data from a database (Input binding)
Save data to a database (Output binding)
Review ToDo API sample with Azure SQL bindings
Learn how to connect Azure Function to Azure SQL with managed identity
Use SQL bindings in Azure Stream Analytics
Azure SQL input binding for Azure Functions (preview)
When a function runs, the Azure SQL input binding retrieves data from a database and passes it to the input
parameter of the function.
For information on setup and configuration details, see the overview.
Examples
More samples for the Azure SQL input binding are available in the GitHub repository.
In-process
Isolated process
namespace AzureSQL.ToDo {
public class ToDoItem {
public Guid Id { get; set; }
public int? order { get; set; }
public string title { get; set; }
public string url { get; set; }
public bool? completed { get; set; }
}
}
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
namespace AzureSQLSamples
{
public static class GetToDoItem
{
[FunctionName("GetToDoItem")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")]
HttpRequest req,
[Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
CommandType = System.Data.CommandType.Text,
Parameters = "@Id={Query.id}",
ConnectionStringSetting = "SqlConnectionString")]
IEnumerable<ToDoItem> toDoItem)
{
return new OkObjectResult(toDoItem.FirstOrDefault());
}
}
}
using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
namespace AzureSQLSamples
{
public static class GetToDoItems
{
[FunctionName("GetToDoItems")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")]
HttpRequest req,
[Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] >
@Priority",
CommandType = System.Data.CommandType.Text,
Parameters = "@Priority={priority}",
ConnectionStringSetting = "SqlConnectionString")]
IEnumerable<ToDoItem> toDoItems)
{
return new OkObjectResult(toDoItems);
}
}
}
HTTP trigger, delete rows
The following example shows a C# function that executes a stored procedure with input from the HTTP request
query parameter.
The stored procedure dbo.DeleteToDo must be created on the SQL database. In this example, the stored
procedure deletes a single record or all records depending on the value of the parameter.
using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
namespace AzureSQL.ToDo
{
public static class DeleteToDo
{
// delete all items or a specific item from querystring
// returns remaining items
// uses input binding with a stored procedure DeleteToDo to delete items and return remaining items
[FunctionName("DeleteToDo")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "DeleteFunction")] HttpRequest req,
ILogger log,
[Sql("DeleteToDo", CommandType = System.Data.CommandType.StoredProcedure,
Parameters = "@Id={Query.id}", ConnectionStringSetting = "SqlConnectionString")]
IEnumerable<ToDoItem> toDoItems)
{
return new OkObjectResult(toDoItems);
}
}
}
NOTE
In the current preview, Azure SQL bindings are only supported by C# class library functions, JavaScript functions, and
Python functions.
More samples for the Azure SQL input binding are available in the GitHub repository.
This section contains the following examples:
HTTP trigger, get multiple rows
HTTP trigger, get row by ID from query string
HTTP trigger, delete rows
The examples refer to a database table:
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
},
{
"name": "todoItems",
"type": "sql",
"direction": "in",
"commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo",
"commandType": "Text",
"connectionStringSetting": "SqlConnectionString"
}
context.res = {
// status: 200, /* Defaults to 200 */
mimetype: "application/json",
body: todoItems
};
}
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
},
{
"name": "todoItem",
"type": "sql",
"direction": "in",
"commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
"commandType": "Text",
"parameters": "@Id = {Query.id}",
"connectionStringSetting": "SqlConnectionString"
}
context.res = {
// status: 200, /* Defaults to 200 */
mimetype: "application/json",
body: todoItem
};
}
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
},
{
"name": "todoItems",
"type": "sql",
"direction": "in",
"commandText": "DeleteToDo",
"commandType": "StoredProcedure",
"parameters": "@Id = {Query.id}",
"connectionStringSetting": "SqlConnectionString"
}
context.res = {
// status: 200, /* Defaults to 200 */
mimetype: "application/json",
body: todoItems
};
}
More samples for the Azure SQL input binding are available in the GitHub repository.
This section contains the following examples:
HTTP trigger, get multiple rows
HTTP trigger, get row by ID from query string
HTTP trigger, delete rows
The examples refer to a database table:
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"name": "todoItems",
"type": "sql",
"direction": "in",
"commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo",
"commandType": "Text",
"connectionStringSetting": "SqlConnectionString"
}
return func.HttpResponse(
json.dumps(rows),
status_code=200,
mimetype="application/json"
)
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"name": "todoItem",
"type": "sql",
"direction": "in",
"commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
"commandType": "Text",
"parameters": "@Id = {Query.id}",
"connectionStringSetting": "SqlConnectionString"
}
return func.HttpResponse(
json.dumps(rows),
status_code=200,
mimetype="application/json"
)
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"name": "todoItems",
"type": "sql",
"direction": "in",
"commandText": "DeleteToDo",
"commandType": "StoredProcedure",
"parameters": "@Id = {Query.id}",
"connectionStringSetting": "SqlConnectionString"
}
return func.HttpResponse(
json.dumps(rows),
status_code=200,
mimetype="application/json"
)
Attributes
In C# class libraries, use the Sql attribute, which has the following properties:
Attribute property | Description
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
name - Required. The name of the variable that represents the query results in function code.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
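For example, a local.settings.json sketch with an illustrative SqlConnectionString value (the server, database, and credential placeholders are assumptions, not required values):

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "SqlConnectionString": "Server=tcp:<server>.database.windows.net,1433;Database=<database>;User ID=<user>;Password=<password>;"
  }
}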
Usage
The attribute's constructor takes the SQL command text, the command type, parameters, and the connection
string setting name. The command can be a Transact-SQL (T-SQL) query with the command type
System.Data.CommandType.Text or stored procedure name with the command type
System.Data.CommandType.StoredProcedure . The connection string setting name corresponds to the application
setting (in local.settings.json for local development) that contains the connection string to the Azure SQL or
SQL Server instance.
Queries executed by the input binding are parameterized in Microsoft.Data.SqlClient to reduce the risk of SQL
injection from the parameter values passed into the binding.
Next steps
Save data to a database (Output binding)
Review ToDo API sample with Azure SQL bindings
Azure SQL output binding for Azure Functions (preview)
Examples
More samples for the Azure SQL output binding are available in the GitHub repository.
In-process
Isolated process
namespace AzureSQL.ToDo {
public class ToDoItem {
public Guid Id { get; set; }
public int? order { get; set; }
public string title { get; set; }
public string url { get; set; }
public bool? completed { get; set; }
}
}
namespace AzureSQL.ToDo
{
public static class PostToDo
{
// create a new ToDoItem from body object
// uses output binding to insert new item into ToDo table
[FunctionName("PostToDo")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req,
ILogger log,
[Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem>
toDoItems)
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
await toDoItems.AddAsync(toDoItem);
await toDoItems.FlushAsync();
List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem };
            return new OkObjectResult(toDoItemList);
        }
    }
}
namespace AzureSQLSamples
{
public static class WriteRecordsAsync
{
[FunctionName("WriteRecordsAsync")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")]
HttpRequest req,
[Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem>
newItems)
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody);
foreach (ToDoItem newItem in incomingItems)
{
await newItems.AddAsync(newItem);
}
            // Rows are upserted here
            await newItems.FlushAsync();

            return new OkObjectResult(incomingItems);
        }
    }
}
NOTE
In the current preview, Azure SQL bindings are only supported by C# class library functions, JavaScript functions, and
Python functions.
More samples for the Azure SQL output binding are available in the GitHub repository.
This section contains the following examples:
HTTP trigger, write records to a table
HTTP trigger, write to two tables
The examples refer to a database table:
module.exports = async function (context, req) {
    if (req.body) {
context.bindings.todoItems = req.body;
context.res = {
body: req.body,
mimetype: "application/json",
status: 201
}
} else {
context.res = {
status: 400,
body: "Error reading request body"
}
}
}
module.exports = async function (context, req) {
    const newLog = {
        RequestTimeStamp: Date.now(),
        ItemCount: 1
    }
if (req.body) {
context.bindings.todoItems = req.body;
context.bindings.requestLog = newLog;
context.res = {
body: req.body,
mimetype: "application/json",
status: 201
}
} else {
context.res = {
status: 400,
body: "Error reading request body"
}
}
}
More samples for the Azure SQL output binding are available in the GitHub repository.
This section contains the following examples:
HTTP trigger, write records to a table
HTTP trigger, write to two tables
The examples refer to a database table:
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"name": "todoItems",
"type": "sql",
"direction": "out",
"commandText": "dbo.ToDo",
"connectionStringSetting": "SqlConnectionString"
}
try:
req_body = req.get_json()
rows = list(map(lambda r: json.loads(r.to_json()), req_body))
except ValueError:
pass
if req_body:
todoItems.set(rows)
return func.HttpResponse(
todoItems.to_json(),
status_code=201,
mimetype="application/json"
)
else:
return func.HttpResponse(
"Error accessing request body",
status_code=400
)
try:
req_body = req.get_json()
rows = list(map(lambda r: json.loads(r.to_json()), req_body))
except ValueError:
pass
requestLog.set(func.SqlRow({
    "RequestTimeStamp": datetime.now(),
    "ItemCount": 1
}))
if req_body:
todoItems.set(rows)
return func.HttpResponse(
todoItems.to_json(),
status_code=201,
mimetype="application/json"
)
else:
return func.HttpResponse(
"Error accessing request body",
status_code=400
)
Attributes
In C# class libraries, use the Sql attribute, which has the following properties:
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
function.json property | Description
name - Required. The name of the variable that represents the entity in function code.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Usage
The CommandText property is the name of the table where the data is to be stored. The connection string setting
name corresponds to the application setting that contains the connection string to the Azure SQL or SQL Server
instance.
The output binding uses the T-SQL MERGE statement, which requires SELECT permissions on the target database.
Next steps
Read data from a database (Input binding)
Review ToDo API sample with Azure SQL bindings
Azure Event Grid bindings for Azure Functions
This reference shows how to connect to Azure Event Grid using Azure Functions triggers and bindings.
Event Grid is an Azure service that sends HTTP requests to notify you about events that happen in publishers. A
publisher is the service or resource that originates the event. For example, an Azure blob storage account is a
publisher, and a blob upload or deletion is an event. Some Azure services have built-in support for publishing
events to Event Grid.
Event handlers receive and process events. Azure Functions is one of several Azure services that have built-in
support for handling Event Grid events. Functions provides an Event Grid trigger, which invokes a function when
an event is received from Event Grid. A similar output binding can be used to send events from your function to
an Event Grid custom topic.
You can also use an HTTP trigger to handle Event Grid events. To learn more, see Receive events to an HTTP endpoint. We recommend using the Event Grid trigger over the HTTP trigger.
Action | Type
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The functionality of the extension varies depending on the extension version:
Extension v3.x
Extension v2.x
Functions 1.x
This version of the extension supports updated Event Grid binding parameter types of
Azure.Messaging.CloudEvent and Azure.Messaging.EventGrid.EventGridEvent.
Add this version of the extension to your project by installing the NuGet package, version 3.x.
Install bundle
The Event Grid extension is part of an extension bundle, which is specified in your host.json project file. You may
need to modify this bundle to change the version of the Event Grid binding, or if bundles aren't already installed.
To learn more, see extension bundle.
Bundle v3.x
Bundle v2.x
Functions 1.x
You can add this version of the extension from the extension bundle v3 by adding or replacing the following
configuration in your host.json file:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
Next steps
Event Grid trigger
Event Grid output binding
Run a function when an Event Grid event is dispatched
Dispatch an Event Grid event
Azure Event Grid trigger for Azure Functions
Use the function trigger to respond to an event sent to an Event Grid topic.
For information on setup and configuration details, see the overview.
NOTE
Event Grid triggers aren't natively supported in an internal load balancer App Service Environment (ASE). The trigger uses
an HTTP request that can't reach the function app without a gateway into the virtual network.
Example
For an HTTP trigger example, see Receive events to an HTTP endpoint.
The type of the input parameter used with an Event Grid trigger depends on these three factors:
Functions runtime version
Binding extension version
Modality of the C# function.
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a Functions version 3.x function that uses a CloudEvent binding parameter:
using Azure.Messaging;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
namespace Company.Function
{
public static class CloudEventTriggerFunction
{
[FunctionName("CloudEventTriggerFunction")]
public static void Run(
ILogger logger,
[EventGridTrigger] CloudEvent e)
{
logger.LogInformation("Event received {type} {subject}", e.Type, e.Subject);
}
}
}
The following example shows a Functions version 3.x function that uses an EventGridEvent binding parameter:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
namespace Company.Function
{
public static class EventGridTriggerDemo
{
[FunctionName("EventGridTriggerDemo")]
public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
{
log.LogInformation(eventGridEvent.Data.ToString());
}
}
}
The following example shows a function that uses a JObject binding parameter:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Microsoft.Extensions.Logging;
namespace Company.Function
{
public static class EventGridTriggerCSharp
{
[FunctionName("EventGridTriggerCSharp")]
public static void Run([EventGridTrigger] JObject eventGridEvent, ILogger log)
{
log.LogInformation(eventGridEvent.ToString(Formatting.Indented));
}
}
}
@FunctionName("eventGridMonitorString")
public void logEvent(
@EventGridTrigger(
name = "event"
)
String content,
final ExecutionContext context) {
context.getLogger().info("Event content: " + content);
}
import java.util.Date;
import java.util.Map;
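These imports support an EventSchema POJO for the event payload. A minimal sketch of the class, assuming fields that match the common Event Grid event properties used by the function below:
public class EventSchema {
    public String topic;
    public String subject;
    public String eventType;
    public Date eventTime;
    public String id;
    public String dataVersion;
    public String metadataVersion;
    public Map<String, Object> data;
}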
Upon arrival, the event's JSON payload is de-serialized into the EventSchema POJO for use by the function. This
process allows the function to access the event's properties in an object-oriented way.
@FunctionName("eventGridMonitor")
public void logEvent(
@EventGridTrigger(
name = "event"
)
EventSchema event,
final ExecutionContext context) {
context.getLogger().info("Event content: ");
context.getLogger().info("Subject: " + event.subject);
context.getLogger().info("Time: " + event.eventTime); // automatically converted to Date by the
runtime
context.getLogger().info("Id: " + event.id);
context.getLogger().info("Data: " + event.data);
}
In the Java functions runtime library, use the EventGridTrigger annotation on parameters whose value would
come from EventGrid. Parameters with these annotations cause the function to run when an event arrives. This
annotation can be used with native Java types, POJOs, or nullable values using Optional<T> .
The following example shows a trigger binding in a function.json file and a JavaScript function that uses the
binding.
Here's the binding data in the function.json file:
{
"bindings": [
{
"type": "eventGridTrigger",
"name": "eventGridEvent",
"direction": "in"
}
],
"disabled": false
}
The following example shows how to configure an Event Grid trigger binding in the function.json file.
{
"bindings":[
{
"type":"eventGridTrigger",
"name":"eventGridEvent",
"direction":"in"
}
]
}
The Event Grid event is made available to the function via a parameter named eventGridEvent , as shown in the
following PowerShell example.
param($eventGridEvent, $TriggerMetadata)

# Make sure to pass hashtables to Out-String so they're logged correctly
$eventGridEvent | Out-String | Write-Host
The following example shows a trigger binding in a function.json file and a Python function that uses the
binding.
Here's the binding data in the function.json file:
{
"bindings": [
{
"type": "eventGridTrigger",
"name": "event",
"direction": "in"
}
],
"disabled": false,
"scriptFile": "__init__.py"
}
Here's the Python code:
import json
import logging

import azure.functions as func


def main(event: func.EventGridEvent):
    result = json.dumps({
        'id': event.id,
        'data': event.get_json(),
        'topic': event.topic,
        'subject': event.subject,
        'event_type': event.event_type,
    })

    logging.info('Python EventGrid trigger processed an event: %s', result)
Attributes
Both in-process and isolated process C# libraries use the EventGridTrigger attribute. C# script instead uses a
function.json configuration file.
In-process
Isolated process
C# script
[FunctionName("EventGridTest")]
public static void EventGridTest([EventGridTrigger] JObject eventGridEvent, ILogger log)
{
Annotations
The EventGridTrigger annotation allows you to declaratively configure an Event Grid binding by providing
configuration values. See the example and configuration sections for more detail.
Configuration
The following table explains the binding configuration properties that you set in the function.json file. There are
no constructor parameters or properties to set in the EventGridTrigger attribute.
name: Required. The variable name used in function code for the parameter that receives the event data.
Extension v3.x
Extension v2.x
Functions 1.x
Event schema
Data for an Event Grid event is received as a JSON object in the body of an HTTP request. The JSON looks
similar to the following example:
[{
"topic":
"/subscriptions/{subscriptionid}/resourceGroups/eg0122/providers/Microsoft.Storage/storageAccounts/egblobsto
re",
"subject": "/blobServices/default/containers/{containername}/blobs/blobname.jpg",
"eventType": "Microsoft.Storage.BlobCreated",
"eventTime": "2018-01-23T17:02:19.6069787Z",
"id": "{guid}",
"data": {
"api": "PutBlockList",
"clientRequestId": "{guid}",
"requestId": "{guid}",
"eTag": "0x8D562831044DDD0",
"contentType": "application/octet-stream",
"contentLength": 2248,
"blobType": "BlockBlob",
"url": "https://egblobstore.blob.core.windows.net/{containername}/blobname.jpg",
"sequencer": "000000000000272D000000000003D60F",
"storageDiagnostics": {
"batchId": "{guid}"
}
},
"dataVersion": "",
"metadataVersion": "1"
}]
The example shown is an array of one element. Event Grid always sends an array and may send more than one
event in the array. The runtime invokes your function once for each array element.
The top-level properties in the event JSON data are the same among all event types, while the contents of the
data property are specific to each event type. The example shown is for a blob storage event.
For explanations of the common and event-specific properties, see Event properties in the Event Grid
documentation.
Next steps
Dispatch an Event Grid event
Azure Event Grid output binding for Azure
Functions
Use the Event Grid output binding to write events to a custom topic. You must have a valid access key for the
custom topic. The Event Grid output binding doesn't support shared access signature (SAS) tokens.
For information on setup and configuration details, see How to work with Event Grid triggers and bindings in
Azure Functions.
IMPORTANT
The Event Grid output binding is only available for Functions 2.x and higher.
Example
The type of the output parameter used with an Event Grid output binding depends on the Functions runtime
version, the binding extension version, and the modality of the C# function. The C# function can be created
using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that binds to a CloudEvent using version 3.x of the extension, which
is in preview:
using System.Threading.Tasks;
using Azure.Messaging;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Azure.WebJobs.Extensions.Http;
namespace Azure.Extensions.WebJobs.Sample
{
public static class CloudEventBindingFunction
{
[FunctionName("CloudEventBindingFunction")]
public static async Task<IActionResult> RunAsync(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[EventGrid(TopicEndpointUri = "EventGridEndpoint", TopicKeySetting = "EventGridKey")]
IAsyncCollector<CloudEvent> eventCollector)
{
CloudEvent e = new CloudEvent("IncomingRequest", "IncomingRequest", await req.ReadAsStringAsync());
await eventCollector.AddAsync(e);
return new OkResult();
}
}
}
The following example shows a C# function that binds to an EventGridEvent using version 3.x of the extension,
which is in preview:
using System.Threading.Tasks;
using Azure.Messaging.EventGrid;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
namespace Azure.Extensions.WebJobs.Sample
{
public static class EventGridEventBindingFunction
{
[FunctionName("EventGridEventBindingFunction")]
public static async Task<IActionResult> RunAsync(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[EventGrid(TopicEndpointUri = "EventGridEndpoint", TopicKeySetting = "EventGridKey")]
IAsyncCollector<EventGridEvent> eventCollector)
{
EventGridEvent e = new EventGridEvent(await req.ReadAsStringAsync(), "IncomingRequest",
"IncomingRequest", "1.0.0");
await eventCollector.AddAsync(e);
return new OkResult();
}
}
}
The following example shows how to use the IAsyncCollector interface to send a batch of messages.
[FunctionName("EventGridAsyncOutput")]
public static async Task Run(
[TimerTrigger("0 */5 * * * *")] TimerInfo myTimer,
[EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting =
"MyEventGridTopicKeySetting")]IAsyncCollector<EventGridEvent> outputEvents,
ILogger log)
{
for (var i = 0; i < 3; i++)
{
var myEvent = new EventGridEvent("message-id-" + i, "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
await outputEvents.AddAsync(myEvent);
}
}
The following example shows a Java function that writes an event to an Event Grid custom topic using the binding's setValue method. Only a fragment of the original sample survives here; a minimal sketch consistent with it follows (the trigger and populated field values are illustrative assumptions):
@FunctionName("eventGridOutputSample")
public void sendEvent(
        @TimerTrigger(name = "timerInfo", schedule = "0 */5 * * * *") String timerInfo,
        @EventGridOutput(
            name = "outputEvent",
            topicEndpointUri = "MyEventGridTopicUriSetting",
            topicKeySetting = "MyEventGridTopicKeySetting") OutputBinding<EventGridEvent> outputEvent,
        final ExecutionContext context) {
    EventGridEvent eventGridOutputDocument = new EventGridEvent();
    // ...populate id, eventType, subject, eventTime, dataVersion, and data...
    outputEvent.setValue(eventGridOutputDocument);
}

class EventGridEvent {
    private String id;
    private String eventType;
    private String subject;
    private String eventTime;
    private String dataVersion;
    private String data;
    // ...getters and setters...
}
The following example shows the Event Grid output binding data in the function.json file.
{
"type": "eventGrid",
"name": "outputEvent",
"topicEndpointUri": "MyEventGridTopicUriSetting",
"topicKeySetting": "MyEventGridTopicKeySetting",
"direction": "out"
}
Here's JavaScript code that sends a single event:
module.exports = async function (context) {
    // timeStamp supplies the event's eventTime value.
    const timeStamp = new Date().toISOString();
    context.bindings.outputEvent = {
        id: 'message-id',
        subject: 'subject-name',
        dataVersion: '1.0',
        eventType: 'event-type',
        data: "event-data",
        eventTime: timeStamp
    };
};
To send multiple events, assign an array to the output binding and push each event:
module.exports = async function (context) {
    const timeStamp = new Date().toISOString();
    context.bindings.outputEvent = [];
    context.bindings.outputEvent.push({
        id: 'message-id-1',
        subject: 'subject-name',
        dataVersion: '1.0',
        eventType: 'event-type',
        data: "event-data",
        eventTime: timeStamp
    });
    context.bindings.outputEvent.push({
        id: 'message-id-2',
        subject: 'subject-name',
        dataVersion: '1.0',
        eventType: 'event-type',
        data: "event-data",
        eventTime: timeStamp
    });
};
The following example demonstrates how to configure a function to output an Event Grid event message. The
section where type is set to eventGrid configures the values needed to establish an Event Grid output binding.
{
"bindings":[
{
"type":"eventGrid",
"name":"outputEvent",
"topicEndpointUri":"MyEventGridTopicUriSetting",
"topicKeySetting":"MyEventGridTopicKeySetting",
"direction":"out"
},
{
"authLevel":"anonymous",
"type":"httpTrigger",
"direction":"in",
"name":"Request",
"methods":[
"get",
"post"
]
},
{
"type":"http",
"direction":"out",
"name":"Response"
}
]
}
In your function, use the Push-OutputBinding cmdlet to send an event to a custom topic through the Event Grid output binding.
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$message = $Request.Query.Message

Push-OutputBinding -Name outputEvent -Value @{
    id = "1"
    EventType = "testEvent"
    Subject = "testapp/testPublish"
    EventTime = "2020-08-27T21:03:07+00:00"
    Data = @{
        Message = $message
    }
    DataVersion = "1.0"
}

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = 200
    Body = "OK"
})
The following example shows a trigger binding in a function.json file and a Python function that uses the binding. The function then sends an event to the custom topic, as specified by the topicEndpointUri.
Here's the binding data in the function.json file:
{
"scriptFile": "__init__.py",
"bindings": [
{
"type": "eventGridTrigger",
"name": "eventGridEvent",
"direction": "in"
},
{
"type": "eventGrid",
"name": "outputEvent",
"topicEndpointUri": "MyEventGridTopicUriSetting",
"topicKeySetting": "MyEventGridTopicKeySetting",
"direction": "out"
}
],
"disabled": false
}
Here's the Python sample to send an event to a custom topic by setting the EventGridOutputEvent:
import logging
import azure.functions as func
import datetime


def main(eventGridEvent: func.EventGridEvent,
         outputEvent: func.Out[func.EventGridOutputEvent]) -> None:
    logging.info("Received event %s", eventGridEvent.get_json())
    outputEvent.set(
        func.EventGridOutputEvent(
            id="test-id",
            data={"tag1": "value1", "tag2": "value2"},
            subject="test-subject",
            event_type="test-event-1",
            event_time=datetime.datetime.utcnow(),
            data_version="1.0"))
Attributes
Both in-process and isolated process C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file.
The attribute's constructor takes the name of an application setting that contains the name of the custom topic,
and the name of an application setting that contains the topic key.
In-process
Isolated process
C# Script
TopicEndpointUri: The name of an app setting that contains the URI for the custom topic, such as MyTopicEndpointUri.
TopicKeySetting: The name of an app setting that contains an access key for the custom topic.
Annotations
For Java classes, use the EventGridOutput annotation.
The attribute's constructor takes the name of an app setting that contains the name of the custom topic, and the
name of an app setting that contains the topic key. For more information about these settings, see Output -
configuration. Here's an EventGridOutput attribute example:
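A sketch of such a function follows; the annotation parameter names mirror the function.json properties shown in the configuration section, and the trigger and payload values are hypothetical:
@FunctionName("eventGridOutputExample")
public void run(
        @TimerTrigger(name = "timerInfo", schedule = "0 */5 * * * *") String timerInfo,
        @EventGridOutput(
            name = "outputEvent",
            topicEndpointUri = "MyEventGridTopicUriSetting",
            topicKeySetting = "MyEventGridTopicKeySetting") OutputBinding<String> outputEvent,
        final ExecutionContext context) {
    // Sends a simple JSON payload to the custom topic; the event fields are illustrative.
    outputEvent.setValue("{\"id\":\"1\",\"eventType\":\"TestEvent\",\"subject\":\"test/subject\"," +
        "\"eventTime\":\"2020-08-27T21:03:07+00:00\",\"dataVersion\":\"1.0\",\"data\":\"sample\"}");
}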
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
name: The variable name used in function code that represents the event.
topicEndpointUri: The name of an app setting that contains the URI for the custom topic, such as MyTopicEndpointUri.
topicKeySetting: The name of an app setting that contains an access key for the custom topic.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
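For example, a minimal local.settings.json carrying these two hypothetical setting names might look like this (the endpoint and key values are placeholders):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "MyEventGridTopicUriSetting": "https://<topic-name>.<region>.eventgrid.azure.net/api/events",
    "MyEventGridTopicKeySetting": "<topic-access-key>"
  }
}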
IMPORTANT
Make sure that you set the value of the TopicEndpointUri configuration property to the name of an app setting that
contains the URI of the custom topic. Don't specify the URI of the custom topic directly in this property.
Usage
The parameter type supported by the Event Grid output binding depends on the Functions runtime version, the
extension package version, and the C# modality used.
Extension v3.x
Extension v2.x
Functions 1.x
Next steps
Dispatch an Event Grid event
Azure Event Hubs trigger and bindings for Azure
Functions
This article explains how to work with Azure Event Hubs bindings for Azure Functions. Azure Functions supports
trigger and output bindings for Event Hubs.
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The functionality of the extension varies depending on the extension version:
Extension v5.x+
Extension v3.x+
Functions v1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
This version uses the newer Event Hubs binding type Azure.Messaging.EventHubs.EventData.
This extension version is available by installing the NuGet package, version 5.x.
Install bundle
The Event Hubs extension is part of an extension bundle, which is specified in your host.json project file. You may need to modify this bundle to change the version of the Event Hubs binding, or if bundles aren't already installed. To learn more, see extension bundle.
Bundle v3.x
Bundle v2.x
Functions v1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
You can add this version of the extension from the extension bundle v3 by adding or replacing the following
code in your host.json file:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
host.json settings
The host.json file contains settings that control behavior for the Event Hubs trigger. The configuration is different
depending on the extension version.
Extension v5.x+
Extension v3.x+
Functions v1.x
{
"version": "2.0",
"extensions": {
"eventHubs": {
"maxEventBatchSize" : 10,
"batchCheckpointFrequency" : 5,
"prefetchCount" : 300,
"transportType" : "amqpWebSockets",
"webProxy" : "https://proxyserver:8080",
"customEndpointAddress" : "amqps://company.gateway.local",
"initialOffsetOptions" : {
"type" : "fromStart",
"enqueuedTimeUtc" : ""
},
"clientRetryOptions":{
"mode" : "exponential",
"tryTimeout" : "00:01:00",
"delay" : "00:00:00.80",
"maximumDelay" : "00:01:00",
"maximumRetries" : 3
}
}
}
}
For a reference of host.json in Azure Functions 2.x and beyond, see host.json reference for Azure Functions.
Next steps
Respond to events sent to an event hub event stream (Trigger)
Write events to an event stream (Output binding)
Azure Event Hubs trigger for Azure Functions
This article explains how to work with Azure Event Hubs trigger for Azure Functions. Azure Functions supports
trigger and output bindings for Event Hubs.
For information on setup and configuration details, see the overview.
Use the function trigger to respond to an event sent to an event hub event stream. You must have read access to
the underlying event hub to set up the trigger. When the function is triggered, the message passed to the
function is typed as a string.
Example
In-process
Isolated process
C# Script
The following example shows a C# function that logs the message body of the Event Hubs trigger.
[FunctionName("EventHubTriggerCSharp")]
public void Run([EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")] string
myEventHubMessage, ILogger log)
{
log.LogInformation($"C# function triggered to process a message: {myEventHubMessage}");
}
To get access to event metadata in function code, bind to an EventData object. You can also access the same
properties by using binding expressions in the method signature. The following example shows both ways to
get the same data:
[FunctionName("EventHubTriggerCSharp")]
public void Run(
[EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")] EventData
myEventHubMessage,
DateTime enqueuedTimeUtc,
Int64 sequenceNumber,
string offset,
ILogger log)
{
log.LogInformation($"Event: {Encoding.UTF8.GetString(myEventHubMessage.Body)}");
// Metadata accessed by binding to EventData
log.LogInformation($"EnqueuedTimeUtc={myEventHubMessage.SystemProperties.EnqueuedTimeUtc}");
log.LogInformation($"SequenceNumber={myEventHubMessage.SystemProperties.SequenceNumber}");
log.LogInformation($"Offset={myEventHubMessage.SystemProperties.Offset}");
// Metadata accessed by using binding expressions in method parameters
log.LogInformation($"EnqueuedTimeUtc={enqueuedTimeUtc}");
log.LogInformation($"SequenceNumber={sequenceNumber}");
log.LogInformation($"Offset={offset}");
}
[FunctionName("EventHubTriggerCSharp")]
public void Run([EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")]
EventData[] eventHubMessages, ILogger log)
{
foreach (var message in eventHubMessages)
{
log.LogInformation($"C# function triggered to process a message:
{Encoding.UTF8.GetString(message.Body)}");
log.LogInformation($"EnqueuedTimeUtc={message.SystemProperties.EnqueuedTimeUtc}");
}
}
The following example shows an Event Hubs trigger binding in a function.json file and a JavaScript function that
uses the binding. The function reads event metadata and logs the message.
The following example shows Event Hubs binding data in the function.json file, which differs for version 1.x of the Functions runtime compared to later versions.
Functions 2.x+
Functions 1.x
{
"type": "eventHubTrigger",
"name": "myEventHubMessage",
"direction": "in",
"eventHubName": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
module.exports = function (context, myEventHubMessage) {
    context.log(`Event hub message: ${myEventHubMessage}`);
    context.done();
};
To receive events in a batch, set cardinality to many in the function.json file, as shown in the following
examples.
Functions 2.x+
Functions 1.x
{
"type": "eventHubTrigger",
"name": "eventHubMessages",
"direction": "in",
"eventHubName": "MyEventHub",
"cardinality": "many",
"connection": "myEventHubReadConnectionAppSetting"
}
module.exports = function (context, eventHubMessages) {
    eventHubMessages.forEach(message => context.log(`Processed message: ${message}`));
    context.done();
};
{
"type": "eventHubTrigger",
"name": "event",
"direction": "in",
"eventHubName": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
import logging
import azure.functions as func


def main(event: func.EventHubEvent):
    logging.info('Function triggered to process a message: %s',
                 event.get_body().decode('utf-8'))

    # Metadata
    for key in event.metadata:
        logging.info(f'Metadata: {key} = {event.metadata[key]}')
The following example shows an Event Hubs trigger binding which logs the message body of the Event Hubs
trigger.
@FunctionName("ehprocessor")
public void eventHubProcessor(
@EventHubTrigger(name = "msg",
eventHubName = "myeventhubname",
connection = "myconnvarname") String message,
final ExecutionContext context )
{
context.getLogger().info(message);
}
In the Java functions runtime library, use the EventHubTrigger annotation on parameters whose value comes
from the event hub. Parameters with these annotations cause the function to run when an event arrives. This
annotation can be used with native Java types, POJOs, or nullable values using Optional<T> .
Attributes
Both in-process and isolated process C# libraries use attributes to configure the trigger. C# script instead uses a function.json configuration file.
In-process
Isolated process
C# Script
In C# class libraries, use the EventHubTriggerAttribute, which supports the following properties.
EventHubName: The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. Can be referenced in app settings, like %eventHubName%.
Annotations
In the Java functions runtime library, use the EventHubTrigger annotation, which supports the following settings:
name
dataType
eventHubName
connection
cardinality
consumerGroup
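For example, a sketch of a batch-receiving trigger that combines these settings (the binding values are hypothetical):
@FunctionName("ehBatchProcessor")
public void eventHubBatchProcessor(
    @EventHubTrigger(
        name = "messages",
        eventHubName = "myeventhubname",
        connection = "myconnvarname",
        consumerGroup = "$Default",
        cardinality = Cardinality.MANY,
        dataType = "string") String[] messages,
    final ExecutionContext context) {
    // With cardinality set to MANY, each invocation receives a batch of messages.
    for (String message : messages) {
        context.getLogger().info(message);
    }
}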
Configuration
The following table explains the trigger configuration properties that you set in the function.json file, which
differs by runtime version.
Functions 2.x+
Functions 1.x
name: The name of the variable that represents the event item in function code.
eventHubName: The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. Can be referenced via app settings %eventHubName%.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Usage
To learn more about how Event Hubs trigger and IoT Hub trigger scales, see Event Hubs trigger.
The parameter type supported by the Event Hubs output binding depends on the Functions runtime version, the
extension package version, and the C# modality used.
Extension v5.x+
Extension v3.x+
Event metadata
The Event Hubs trigger provides several metadata properties. Metadata properties can be used as part of
binding expressions in other bindings or as parameters in your code. The properties come from the EventData
class.
See code examples that use these properties earlier in this article.
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Event Hubs. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
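For example, with connection set to a hypothetical name EventHubConnection, either of the following app settings would satisfy the lookup; the first is an exact-match connection string, the second an identity-based prefix setting (all values are placeholders):

EventHubConnection=Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>
EventHubConnection__fullyQualifiedNamespace=<namespace>.servicebus.windows.net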
Connection string
Obtain this connection string by clicking the Connection Information button for the namespace. The connection string must be for an Event Hubs namespace, not the event hub itself.
When used for triggers, the connection string must have at least "read" permissions to activate the function.
When used for output bindings, the connection string must have "send" permissions to send messages to the
event stream.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
In this mode, the extension requires the following properties:
NOTE
The environment variable provided must currently be prefixed by AzureWebJobs to work in the Consumption plan. In
Premium plans, this prefix is not required.
<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace: The fully qualified Event Hubs namespace, such as myeventhubs.servicebus.windows.net.
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
NOTE
When using Azure App Configuration or Key Vault to provide settings for Managed Identity connections, setting names
should use a valid key separator such as : or / in place of the __ to ensure names are resolved correctly.
For example, <CONNECTION_NAME_PREFIX>:fullyQualifiedNamespace .
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your event hub at runtime. The scope of the
role assignment must be for an Event Hubs namespace, not the event hub itself. Management roles like Owner
are not sufficient. The following table shows built-in roles that are recommended when using the Event Hubs
extension in normal operation. Your application may require additional permissions based on the code you
write.
Trigger: Azure Event Hubs Data Receiver, Azure Event Hubs Data Owner
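For example, a role assignment scoped to the namespace can be created with the Azure CLI (all resource identifiers here are placeholders):

az role assignment create \
  --assignee "<principal-or-app-id>" \
  --role "Azure Event Hubs Data Receiver" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace-name>"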
host.json settings
The host.json file contains settings that control Event Hubs trigger behavior. See the host.json settings section for
details regarding available settings.
Next steps
Write events to an event stream (Output binding)
Azure Event Hubs output binding for Azure
Functions
This article explains how to work with Azure Event Hubs bindings for Azure Functions. Azure Functions supports
trigger and output bindings for Event Hubs.
For information on setup and configuration details, see the overview.
Use the Event Hubs output binding to write events to an event stream. You must have send permission to an
event hub to write events to it.
Make sure the required package references are in place before you try to implement an output binding.
Example
In-process
Isolated process
C# Script
The following example shows a C# function that writes a message to an event hub, using the method return
value as the output:
[FunctionName("EventHubOutput")]
[return: EventHub("outputEventHubMessage", Connection = "EventHubConnectionAppSetting")]
public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
{
log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
return $"{DateTime.Now}";
}
The following example shows how to use the IAsyncCollector interface to send a batch of messages. This
scenario is common when you are processing messages coming from one Event Hub and sending the result to
another Event Hub.
[FunctionName("EH2EH")]
public static async Task Run(
[EventHubTrigger("source", Connection = "EventHubConnectionAppSetting")] EventData[] events,
[EventHub("dest", Connection = "EventHubConnectionAppSetting")]IAsyncCollector<string> outputEvents,
ILogger log)
{
    foreach (EventData eventData in events)
    {
        // do some processing:
        var myProcessedEvent = DoSomething(eventData);

        // then send the message
        await outputEvents.AddAsync(JsonConvert.SerializeObject(myProcessedEvent));
    }
}
The following example shows an event hub trigger binding in a function.json file and a function that uses the
binding. The function writes an output message to an event hub.
The following example shows Event Hubs binding data in the function.json file, which differs for version 1.x of the Functions runtime compared to later versions.
Functions 2.x+
Functions 1.x
{
"type": "eventHub",
"name": "outputEventHubMessage",
"eventHubName": "myeventhub",
"connection": "MyEventHubSendAppSetting",
"direction": "out"
}
module.exports = function (context) {
    var timeStamp = new Date().toISOString();
    var message = 'Message created at: ' + timeStamp;
    context.bindings.outputEventHubMessage = [];
    context.bindings.outputEventHubMessage.push("1 " + message);
    context.bindings.outputEventHubMessage.push("2 " + message);
    context.done();
};
{
"type": "eventHub",
"name": "$return",
"eventHubName": "myeventhub",
"connection": "MyEventHubSendAppSetting",
"direction": "out"
}
The following example shows a Java function that writes a message containing the current time to an Event Hub.
@FunctionName("sendTime")
@EventHubOutput(name = "event", eventHubName = "samples-workitems", connection = "AzureEventHubConnection")
public String sendTime(
@TimerTrigger(name = "sendTimeTrigger", schedule = "0 */5 * * * *") String timerInfo) {
return LocalDateTime.now().toString();
}
In the Java functions runtime library, use the @EventHubOutput annotation on parameters whose value would be
published to Event Hub. The parameter should be of type OutputBinding<T> , where T is a POJO or any native
Java type.
Attributes
Both in-process and isolated process C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file.
In-process
Isolated process
C# Script
Use the EventHubAttribute to define an output binding to an event hub, which supports the following
properties.
EventHubName: The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime.
Annotations
In the Java functions runtime library, use the EventHubOutput annotation on parameters whose value would be
published to Event Hub. The following settings are supported on the annotation:
name
dataType
eventHubName
connection
Configuration
The following table explains the binding configuration properties that you set in the function.json file, which
differs by runtime version.
Functions 2.x+
Functions 1.x
name: The variable name used in function code that represents the event.
eventHubName: Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Usage
The parameter type supported by the Event Hubs output binding depends on the Functions runtime version, the
extension package version, and the C# modality used.
Extension v5.x+
Extension v3.x+
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Event Hubs. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
Obtain this connection string by clicking the Connection Information button for the namespace, not the event
hub itself. The connection string must be for an Event Hubs namespace, not the event hub itself.
When used for triggers, the connection string must have at least "read" permissions to activate the function.
When used for output bindings, the connection string must have "send" permissions to send messages to the
event stream.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
In this mode, the extension requires the following properties:
NOTE
The environment variable provided must currently be prefixed by AzureWebJobs to work in the Consumption plan. In
Premium plans, this prefix is not required.
<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace: The fully qualified Event Hubs namespace, such as myeventhubs.servicebus.windows.net.
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
NOTE
When using Azure App Configuration or Key Vault to provide settings for Managed Identity connections, setting names
should use a valid key separator such as : or / in place of the __ to ensure names are resolved correctly.
For example, <CONNECTION_NAME_PREFIX>:fullyQualifiedNamespace .
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your event hub at runtime. The scope of the
role assignment must be for an Event Hubs namespace, not the event hub itself. Management roles like Owner
are not sufficient. The following table shows built-in roles that are recommended when using the Event Hubs
extension in normal operation. Your application may require additional permissions based on the code you
write.
Trigger: Azure Event Hubs Data Receiver, Azure Event Hubs Data Owner
Next steps
Respond to events sent to an event hub event stream (Trigger)
Azure IoT Hub bindings for Azure Functions
This set of articles explains how to work with Azure Functions bindings for IoT Hub. The IoT Hub support is
based on the Azure Event Hubs Binding.
IMPORTANT
While the following code samples use the Event Hub API, the given syntax is applicable for IoT Hub functions.
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The functionality of the extension varies depending on the extension version:
Extension v5.x+
Extension v3.x+
Functions v1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
This version uses the newer Event Hubs binding type Azure.Messaging.EventHubs.EventData.
This extension version is available by installing the NuGet package, version 5.x.
Install bundle
The Event Hubs extension is part of an extension bundle, which is specified in your host.json project file. You may need to modify this bundle to change the version of the Event Hubs binding, or if bundles aren't already installed. To learn more, see extension bundle.
Bundle v3.x
Bundle v2.x
Functions v1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
You can add this version of the extension from the extension bundle v3 by adding or replacing the following
code in your host.json file:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
host.json settings
The host.json file contains settings that control behavior for the Event Hubs trigger. The configuration is different
depending on the extension version.
Extension v5.x+
Extension v3.x+
Functions v1.x
{
"version": "2.0",
"extensions": {
"eventHubs": {
"maxEventBatchSize" : 10,
"batchCheckpointFrequency" : 5,
"prefetchCount" : 300,
"transportType" : "amqpWebSockets",
"webProxy" : "https://proxyserver:8080",
"customEndpointAddress" : "amqps://company.gateway.local",
"initialOffsetOptions" : {
"type" : "fromStart",
"enqueuedTimeUtc" : ""
},
"clientRetryOptions":{
"mode" : "exponential",
"tryTimeout" : "00:01:00",
"delay" : "00:00:00.80",
"maximumDelay" : "00:01:00",
"maximumRetries" : 3
}
}
}
}
For a reference of host.json in Azure Functions 2.x and beyond, see host.json reference for Azure Functions.
Next steps
Respond to events sent to an event hub event stream (Trigger)
Write events to an event stream (Output binding)
Azure IoT Hub trigger for Azure Functions
This article explains how to work with Azure Functions bindings for IoT Hub. The IoT Hub support is based on
the Azure Event Hubs Binding.
For information on setup and configuration details, see the overview.
IMPORTANT
While the following code samples use the Event Hub API, the given syntax is applicable for IoT Hub functions.
Use the function trigger to respond to an event sent to an event hub event stream. You must have read access to
the underlying event hub to set up the trigger. When the function is triggered, the message passed to the
function is typed as a string.
Example
In-process
Isolated process
C# Script
The following example shows a C# function that logs the message body of the Event Hubs trigger.
[FunctionName("EventHubTriggerCSharp")]
public void Run([EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")] string
myEventHubMessage, ILogger log)
{
log.LogInformation($"C# function triggered to process a message: {myEventHubMessage}");
}
To get access to event metadata in function code, bind to an EventData object. You can also access the same
properties by using binding expressions in the method signature. The following example shows both ways to
get the same data:
[FunctionName("EventHubTriggerCSharp")]
public void Run(
[EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")] EventData
myEventHubMessage,
DateTime enqueuedTimeUtc,
Int64 sequenceNumber,
string offset,
ILogger log)
{
log.LogInformation($"Event: {Encoding.UTF8.GetString(myEventHubMessage.Body)}");
// Metadata accessed by binding to EventData
log.LogInformation($"EnqueuedTimeUtc={myEventHubMessage.SystemProperties.EnqueuedTimeUtc}");
log.LogInformation($"SequenceNumber={myEventHubMessage.SystemProperties.SequenceNumber}");
log.LogInformation($"Offset={myEventHubMessage.SystemProperties.Offset}");
// Metadata accessed by using binding expressions in method parameters
log.LogInformation($"EnqueuedTimeUtc={enqueuedTimeUtc}");
log.LogInformation($"SequenceNumber={sequenceNumber}");
log.LogInformation($"Offset={offset}");
}
NOTE
When receiving events in a batch, you can't bind to method parameters as in the above example with DateTime enqueuedTimeUtc; you must instead receive these values from each EventData object.
[FunctionName("EventHubTriggerCSharp")]
public void Run([EventHubTrigger("samples-workitems", Connection = "EventHubConnectionAppSetting")]
EventData[] eventHubMessages, ILogger log)
{
foreach (var message in eventHubMessages)
{
log.LogInformation($"C# function triggered to process a message:
{Encoding.UTF8.GetString(message.Body)}");
log.LogInformation($"EnqueuedTimeUtc={message.SystemProperties.EnqueuedTimeUtc}");
}
}
The following example shows an Event Hubs trigger binding in a function.json file and a JavaScript function that
uses the binding. The function reads event metadata and logs the message.
The following example shows Event Hubs binding data in the function.json file, which differs for version 1.x of the Functions runtime compared to later versions.
Functions 2.x+
Functions 1.x
{
"type": "eventHubTrigger",
"name": "myEventHubMessage",
"direction": "in",
"eventHubName": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
module.exports = function (context, myEventHubMessage) {
    context.log(`Event hub message: ${myEventHubMessage}`);
    context.done();
};
To receive events in a batch, set cardinality to many in the function.json file, as shown in the following
examples.
Functions 2.x+
Functions 1.x
{
"type": "eventHubTrigger",
"name": "eventHubMessages",
"direction": "in",
"eventHubName": "MyEventHub",
"cardinality": "many",
"connection": "myEventHubReadConnectionAppSetting"
}
module.exports = function (context, eventHubMessages) {
    eventHubMessages.forEach(message => context.log(`Processed message: ${message}`));
    context.done();
};
{
"type": "eventHubTrigger",
"name": "event",
"direction": "in",
"eventHubName": "MyEventHub",
"connection": "myEventHubReadConnectionAppSetting"
}
import logging
import azure.functions as func


def main(event: func.EventHubEvent):
    # Metadata
    for key in event.metadata:
        logging.info(f'Metadata: {key} = {event.metadata[key]}')
The following example shows an Event Hubs trigger binding which logs the message body of the Event Hubs
trigger.
@FunctionName("ehprocessor")
public void eventHubProcessor(
@EventHubTrigger(name = "msg",
eventHubName = "myeventhubname",
connection = "myconnvarname") String message,
final ExecutionContext context )
{
context.getLogger().info(message);
}
In the Java functions runtime library, use the EventHubTrigger annotation on parameters whose value comes
from the event hub. Parameters with these annotations cause the function to run when an event arrives. This
annotation can be used with native Java types, POJOs, or nullable values using Optional<T> .
Attributes
Both in-process and isolated process C# libraries use attributes to configure the trigger. C# script instead uses a function.json configuration file.
In-process
Isolated process
C# Script
In C# class libraries, use the EventHubTriggerAttribute, which supports the following properties.
EventHubName: The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. Can be referenced in app settings, like %eventHubName%.
Configuration
The following table explains the trigger configuration properties that you set in the function.json file, which
differs by runtime version.
Functions 2.x+
Functions 1.x
name: The name of the variable that represents the event item in function code.
eventHubName: The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. Can be referenced via app settings %eventHubName%.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Usage
To learn more about how Event Hubs trigger and IoT Hub trigger scales, see Event Hubs trigger.
The parameter type supported by the Event Hubs output binding depends on the Functions runtime version, the
extension package version, and the C# modality used.
Extension v5.x+
Extension v3.x+
Event metadata
The Event Hubs trigger provides several metadata properties. Metadata properties can be used as part of
binding expressions in other bindings or as parameters in your code. The properties come from the EventData
class.
See code examples that use these properties earlier in this article.
Connections
The connection property is a reference to environment configuration that contains the name of an application setting containing a connection string. You can get this connection string by selecting the Connection Information button for the namespace. The connection string must be for an Event Hubs namespace, not the event hub itself.
The connection string must have at least "read" permissions to activate the function.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
NOTE
Identity-based connections aren't supported by the IoT Hub trigger. If you need to use managed identities end-to-end, you can instead use IoT Hub Routing to send data to an event hub you control. In that way, outbound routing can be authenticated with a managed identity, and the event can be read from that event hub using a managed identity.
host.json properties
The host.json file contains settings that control Event Hub trigger behavior. See the host.json settings section for
details regarding available settings.
Next steps
Write events to an event stream (Output binding)
Apache Kafka bindings for Azure Functions
overview
The Kafka extension for Azure Functions lets you write values out to Apache Kafka topics by using an output
binding. You can also use a trigger to invoke your functions in response to messages in Kafka topics.
IMPORTANT
Kafka bindings are only available for Functions on the Elastic Premium Plan and Dedicated (App Service) plan. They are
only supported on version 3.x and later versions of the Functions runtime.
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
Add the extension to your project by installing this NuGet package.
Install bundle
The Kafka extension is part of an extension bundle, which is specified in your host.json project file. When you
create a project that targets Functions version 3.x or later, you should already have this bundle installed. To learn
more, see extension bundle.
For the Kafka trigger to scale when hosted in a Premium plan, you need to enable runtime scale monitoring. In the Azure portal, in your function app, choose Configuration, and on the Function runtime settings tab set Runtime scale monitoring to On.
host.json settings
This section describes the configuration settings available for this binding in versions 3.x and higher. Settings in
the host.json file apply to all functions in a function app instance. For more information about function app
configuration settings in versions 3.x and later versions, see the host.json reference for Azure Functions.
{
"version": "2.0",
"extensions": {
"kafka": {
"maxBatchSize": 64,
"SubscriberIntervalInSeconds": 1,
"ExecutorChannelCapacity": 1,
"ChannelFullRetryIntervalInMs": 50
}
}
}
The following properties, which are inherited from the Apache Kafka C/C++ client library, are also supported in the kafka section of host.json, either for triggers only or for both output bindings and triggers:
Next steps
Run a function from an Apache Kafka event stream
Apache Kafka trigger for Azure Functions
You can use the Apache Kafka trigger in Azure Functions to run your function code in response to messages in
Kafka topics. You can also use a Kafka output binding to write from your function to a topic. For information on
setup and configuration details, see Apache Kafka bindings for Azure Functions overview.
IMPORTANT
Kafka bindings are only available for Functions on the Elastic Premium Plan and Dedicated (App Service) plan. They are
only supported on version 3.x and later versions of the Functions runtime.
Example
The usage of the trigger depends on the C# modality used in your function app, which can be one of the
following modes:
In-process
Isolated process
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
The attributes you use depend on the specific event provider.
Confluent
Event Hubs
The following example shows a C# function that reads and logs the Kafka message as a Kafka event:
[FunctionName("KafkaTrigger")]
public static void Run(
[KafkaTrigger("BrokerList",
"topic",
Username = "ConfluentCloudUserName",
Password = "ConfluentCloudPassword",
Protocol = BrokerProtocol.SaslSsl,
AuthenticationMode = BrokerAuthenticationMode.Plain,
ConsumerGroup = "$Default")] KafkaEventData<string> kevent, ILogger log)
{
log.LogInformation($"C# Kafka trigger function processed a message: {kevent.Value}");
}
To receive events in a batch, use an input string or KafkaEventData as an array, as shown in the following
example:
[FunctionName("KafkaTriggerMany")]
public static void Run(
[KafkaTrigger("BrokerList",
"topic",
Username = "ConfluentCloudUserName",
Password = "ConfluentCloudPassword",
Protocol = BrokerProtocol.SaslSsl,
AuthenticationMode = BrokerAuthenticationMode.Plain,
ConsumerGroup = "$Default")] KafkaEventData<string>[] events, ILogger log)
{
foreach (KafkaEventData<string> kevent in events)
{
log.LogInformation($"C# Kafka trigger function processed a message: {kevent.Value}");
}
}
The following function logs the message and headers for the Kafka Event:
[FunctionName("KafkaTriggerSingleWithHeaders")]
public static void Run(
[KafkaTrigger("BrokerList",
"topic",
Username = "ConfluentCloudUserName",
Password = "ConfluentCloudPassword",
Protocol = BrokerProtocol.SaslSsl,
AuthenticationMode = BrokerAuthenticationMode.Plain,
ConsumerGroup = "$Default")] KafkaEventData<string> kevent, ILogger log)
{
log.LogInformation($"C# Kafka trigger function processed a message: {kevent.Value}");
log.LogInformation("Headers: ");
var headers = kevent.Headers;
foreach (var header in headers)
{
log.LogInformation($"Key = {header.Key} Value =
{System.Text.Encoding.UTF8.GetString(header.Value)}");
}
}
You can define a generic Avro schema for the event passed to the trigger. The following string value defines the
generic Avro schema:
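The schema string matches the Payment record that appears in this article's function.json examples; as a C# constant it might look like this:

const string GenericAvroSchema = @"{
    ""type"": ""record"",
    ""name"": ""Payment"",
    ""namespace"": ""io.confluent.examples.clients.basicavro"",
    ""fields"": [
        { ""name"": ""id"", ""type"": ""string"" },
        { ""name"": ""amount"", ""type"": ""double"" },
        { ""name"": ""type"", ""type"": ""string"" }
    ]
}";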
You can define a specific Avro schema for the event passed to the trigger. The following defines the UserRecord
class:
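A minimal sketch of such a class follows; the field set is an assumption for illustration, and values consumed as typed Avro records implement Avro's ISpecificRecord interface:

using Avro;
using Avro.Specific;

public class UserRecord : ISpecificRecord
{
    // The field set below is a hypothetical example.
    public const string SchemaText = @"{
        ""type"": ""record"", ""name"": ""UserRecord"",
        ""fields"": [
            { ""name"": ""registertime"", ""type"": ""long"" },
            { ""name"": ""userid"", ""type"": ""string"" }
        ]}";

    public static readonly Schema _SCHEMA = Schema.Parse(SchemaText);

    public long RegisterTime { get; set; }
    public string UserId { get; set; }

    public Schema Schema => _SCHEMA;

    // Get and Put map Avro field positions to the C# properties.
    public object Get(int fieldPos) => fieldPos switch
    {
        0 => RegisterTime,
        1 => UserId,
        _ => throw new AvroRuntimeException("Bad index " + fieldPos)
    };

    public void Put(int fieldPos, object fieldValue)
    {
        switch (fieldPos)
        {
            case 0: RegisterTime = (long)fieldValue; break;
            case 1: UserId = (string)fieldValue; break;
            default: throw new AvroRuntimeException("Bad index " + fieldPos);
        }
    }
}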
[FunctionName(nameof(User))]
public static void User(
[KafkaTrigger("LocalBroker", "users", ConsumerGroup = "azfunc")] KafkaEventData<string, UserRecord>[]
kafkaEvents,
ILogger logger)
{
foreach (var kafkaEvent in kafkaEvents)
{
logger.LogInformation($"{JsonConvert.SerializeObject(kafkaEvent.Value)}");
}
}
For a complete set of working .NET examples, see the Kafka extension repository.
NOTE
For an equivalent set of TypeScript examples, see the Kafka extension repository
The specific properties of the function.json file depend on your event provider, which in these examples are
either Confluent or Azure Event Hubs. The following examples show a Kafka trigger for a function that reads and
logs a Kafka message.
The following function.json defines the trigger for the specific provider:
Confluent
Event Hubs
{
"bindings": [
{
"type": "kafkaTrigger",
"name": "event",
"direction": "in",
"topic": "topic",
"brokerList": "%BrokerList%",
"username": "%ConfluentCloudUserName%",
"password": "%ConfluentCloudPassword%",
"protocol": "saslSsl",
"authenticationMode": "plain",
"consumerGroup" : "$Default",
"dataType": "string"
}
]
}
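The JavaScript handler for this binding receives the message as its second argument; a minimal sketch:

module.exports = async function (context, event) {
    // The binding named "event" in function.json delivers the Kafka message value.
    context.log(`Kafka trigger processed message: ${event}`);
};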
To receive events in a batch, set the cardinality value to many in the function.json file, as shown in the
following examples:
Confluent
Event Hubs
{
"bindings": [
{
"type": "kafkaTrigger",
"name": "event",
"direction": "in",
"protocol" : "SASLSSL",
"password" : "%ConfluentCloudPassword%",
"dataType" : "string",
"topic" : "topic",
"authenticationMode" : "PLAIN",
"cardinality" : "MANY",
"consumerGroup" : "$Default",
"username" : "%ConfluentCloudUserName%",
"brokerList" : "%BrokerList%"
}
]
}
The following code then parses the array of events and logs the event data:
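A minimal sketch of such a handler, iterating the batch delivered by the binding named event:

module.exports = async function (context, event) {
    // With cardinality set to "many", the binding delivers an array of messages.
    for (const message of event) {
        context.log(`Kafka trigger processed message: ${message}`);
    }
};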
You can define a generic Avro schema for the event passed to the trigger. The following function.json defines the
trigger for the specific provider with a generic Avro schema:
Confluent
Event Hubs
{
"bindings" : [ {
"type" : "kafkaTrigger",
"direction" : "in",
"name" : "kafkaAvroGenericSingle",
"protocol" : "SASLSSL",
"password" : "ConfluentCloudPassword",
"topic" : "topic",
"avroSchema" : "
{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields
\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},
{\"name\":\"type\",\"type\":\"string\"}]}",
"authenticationMode" : "PLAIN",
"consumerGroup" : "$Default",
"username" : "ConfluentCloudUsername",
"brokerList" : "%BrokerList%"
} ]
}
For a complete set of working JavaScript examples, see the Kafka extension repository.
The specific properties of the function.json file depend on your event provider, which in these examples are
either Confluent or Azure Event Hubs. The following examples show a Kafka trigger for a function that reads and
logs a Kafka message.
The following function.json defines the trigger for the specific provider:
Confluent
Event Hubs
{
"bindings": [
{
"type": "kafkaTrigger",
"name": "kafkaEvent",
"direction": "in",
"protocol" : "SASLSSL",
"password" : "%ConfluentCloudPassword%",
"dataType" : "string",
"topic" : "topic",
"authenticationMode" : "PLAIN",
"consumerGroup" : "$Default",
"username" : "%ConfluentCloudUserName%",
"brokerList" : "%BrokerList%",
"sslCaLocation": "confluent_cloud_cacert.pem"
}
]
}
param($kafkaEvent, $TriggerMetadata)

Write-Output "Powershell Kafka trigger function called for message $($kafkaEvent.Value)"
To receive events in a batch, set the cardinality value to many in the function.json file, as shown in the
following examples:
Confluent
Event Hubs
{
"bindings": [
{
"type": "kafkaTrigger",
"name": "kafkaEvent",
"direction": "in",
"protocol" : "SASLSSL",
"password" : "%ConfluentCloudPassword%",
"dataType" : "string",
"topic" : "topic",
"authenticationMode" : "PLAIN",
"cardinality" : "MANY",
"consumerGroup" : "$Default",
"username" : "%ConfluentCloudUserName%",
"brokerList" : "%BrokerList%",
"sslCaLocation": "confluent_cloud_cacert.pem"
}
]
}
The following code then parses the array of events and logs the event data:
param($kafkaEvents, $TriggerMetadata)
$kafkaEvents
foreach ($kafkaEvent in $kafkaEvents) {
$event = $kafkaEvent | ConvertFrom-Json -AsHashtable
Write-Output "Powershell Kafka trigger function called for message $event.Value"
}
Confluent
Event Hubs
{
"bindings" : [ {
"type" : "kafkaTrigger",
"direction" : "in",
"name" : "kafkaEvent",
"protocol" : "SASLSSL",
"password" : "ConfluentCloudPassword",
"topic" : "topic",
"authenticationMode" : "PLAIN",
"avroSchema" : "
{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields
\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},
{\"name\":\"type\",\"type\":\"string\"}]}",
"consumerGroup" : "$Default",
"username" : "ConfluentCloudUsername",
"brokerList" : "%BrokerList%"
} ]
}
param($kafkaEvent, $TriggerMetadata)

Write-Output "Powershell Kafka trigger function called for message $($kafkaEvent.Value)"
For a complete set of working PowerShell examples, see the Kafka extension repository.
The specific properties of the function.json file depend on your event provider, which in these examples are
either Confluent or Azure Event Hubs. The following examples show a Kafka trigger for a function that reads and
logs a Kafka message.
The following function.json defines the trigger for the specific provider:
Confluent
Event Hubs
{
"scriptFile": "main.py",
"bindings": [
{
"type": "kafkaTrigger",
"name": "kevent",
"topic": "topic",
"brokerList": "%BrokerList%",
"username": "%ConfluentCloudUserName%",
"password": "%ConfluentCloudPassword%",
"consumerGroup" : "functions",
"protocol": "saslSsl",
"authenticationMode": "plain"
}
]
}
The following code then runs when the function is triggered:
import logging
from azure.functions import KafkaEvent
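A minimal sketch of the handler body (the kevent parameter matches the name property in the function.json above; this is an illustration, not the article's original sample):

import logging
from azure.functions import KafkaEvent

def main(kevent: KafkaEvent):
    # Log the raw message body and the event metadata.
    logging.info(kevent.get_body().decode('utf-8'))
    logging.info(kevent.metadata)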
To receive events in a batch, set the cardinality value to many in the function.json file, as shown in the
following examples:
Confluent
Event Hubs
{
"scriptFile": "main.py",
"bindings": [
{
"type" : "kafkaTrigger",
"direction": "in",
"name" : "kevents",
"protocol" : "SASLSSL",
"password" : "%ConfluentCloudPassword%",
"topic" : "message_python",
"authenticationMode" : "PLAIN",
"cardinality" : "MANY",
"dataType": "string",
"consumerGroup" : "$Default",
"username" : "%ConfluentCloudUserName%",
"BrokerList" : "%BrokerList%"
}
]
}
The following code then parses the array of events and logs the event data:
import logging
import typing
from azure.functions import KafkaEvent
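A minimal sketch of the batch handler (kevents matches the binding name defined above; illustrative only):

import logging
import typing
from azure.functions import KafkaEvent

def main(kevents: typing.List[KafkaEvent]):
    # Iterate over the batch and log each event body.
    for event in kevents:
        logging.info(event.get_body())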
You can define a generic Avro schema for the event passed to the trigger. The following function.json defines the
trigger for the specific provider with a generic Avro schema:
Confluent
Event Hubs
{
"scriptFile": "main.py",
"bindings" : [ {
"type" : "kafkaTrigger",
"direction" : "in",
"name" : "kafkaTriggerAvroGeneric",
"protocol" : "SASLSSL",
"password" : "ConfluentCloudPassword",
"topic" : "topic",
"authenticationMode" : "PLAIN",
"avroSchema" : "
{\"type\":\"record\",\"name\":\"Payment\",\"namespace\":\"io.confluent.examples.clients.basicavro\",\"fields
\":[{\"name\":\"id\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},
{\"name\":\"type\",\"type\":\"string\"}]}",
"consumerGroup" : "$Default",
"username" : "ConfluentCloudUsername",
"brokerList" : "%BrokerList%"
} ]
}
import logging
from azure.functions import KafkaEvent
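A minimal sketch of the handler (the parameter name matches the binding's name property above; illustrative only):

import logging
from azure.functions import KafkaEvent

def main(kafkaTriggerAvroGeneric: KafkaEvent):
    # The event body carries the Avro-deserialized payload.
    logging.info(kafkaTriggerAvroGeneric.get_body().decode('utf-8'))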
For a complete set of working Python examples, see the Kafka extension repository.
The annotations you use to configure your trigger depend on the specific event provider.
Confluent
Event Hubs
The following example shows a Java function that reads and logs the content of the Kafka event:
@FunctionName("KafkaTrigger")
public void runSingle(
@KafkaTrigger(
name = "KafkaTrigger",
topic = "topic",
brokerList="%BrokerList%",
consumerGroup="$Default",
username = "%ConfluentCloudUsername%",
password = "ConfluentCloudPassword",
authenticationMode = BrokerAuthenticationMode.PLAIN,
protocol = BrokerProtocol.SASLSSL,
// sslCaLocation = "confluent_cloud_cacert.pem", // Enable this line for windows.
dataType = "string"
) String kafkaEventData,
final ExecutionContext context) {
context.getLogger().info(kafkaEventData);
}
To receive events in a batch, use a string array as the input, as shown in the following example:
@FunctionName("KafkaTriggerMany")
public void runMany(
@KafkaTrigger(
name = "kafkaTriggerMany",
topic = "topic",
brokerList="%BrokerList%",
consumerGroup="$Default",
username = "%ConfluentCloudUsername%",
password = "ConfluentCloudPassword",
authenticationMode = BrokerAuthenticationMode.PLAIN,
protocol = BrokerProtocol.SASLSSL,
// sslCaLocation = "confluent_cloud_cacert.pem", // Enable this line for windows.
cardinality = Cardinality.MANY,
dataType = "string"
) String[] kafkaEvents,
final ExecutionContext context) {
for (String kevent: kafkaEvents) {
context.getLogger().info(kevent);
}
}
The following function logs the message and headers for the Kafka event:
@FunctionName("KafkaTriggerManyWithHeaders")
public void runSingle(
@KafkaTrigger(
name = "KafkaTrigger",
topic = "topic",
brokerList="%BrokerList%",
consumerGroup="$Default",
username = "%ConfluentCloudUsername%",
password = "ConfluentCloudPassword",
authenticationMode = BrokerAuthenticationMode.PLAIN,
protocol = BrokerProtocol.SASLSSL,
// sslCaLocation = "confluent_cloud_cacert.pem", // Enable this line for windows.
dataType = "string",
cardinality = Cardinality.MANY
) List<String> kafkaEvents,
final ExecutionContext context) {
Gson gson = new Gson();
for (String keventstr: kafkaEvents) {
KafkaEntity kevent = gson.fromJson(keventstr, KafkaEntity.class);
context.getLogger().info("Java Kafka trigger function called for message: " + kevent.Value);
context.getLogger().info("Headers for the message:");
for (KafkaHeaders header : kevent.Headers) {
String decodedValue = new String(Base64.getDecoder().decode(header.Value));
context.getLogger().info("Key:" + header.Key + " Value:" + decodedValue);
}
}
}
You can define a generic Avro schema for the event passed to the trigger. The following function defines a
trigger for the specific provider with a generic Avro schema:
@FunctionName("KafkaAvroGenericTrigger")
public void runOne(
@KafkaTrigger(
name = "kafkaAvroGenericSingle",
topic = "topic",
brokerList="%BrokerList%",
consumerGroup="$Default",
username = "ConfluentCloudUsername",
password = "ConfluentCloudPassword",
avroSchema = schema,
authenticationMode = BrokerAuthenticationMode.PLAIN,
protocol = BrokerProtocol.SASLSSL) Payment payment,
final ExecutionContext context) {
context.getLogger().info(payment.toString());
}
For a complete set of working Java examples for Confluent, see the Kafka extension repository.
Attributes
Both in-process and isolated process C# libraries use the KafkaTriggerAttribute to define the function trigger.
The following table explains the properties you can set using this trigger attribute:
PARAMETER  DESCRIPTION
Annotations
The KafkaTrigger annotation allows you to create a function that runs when a message is received on a topic. Supported options include the following elements:
ELEMENT  DESCRIPTION
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
Kafka events are passed to the function as KafkaEventData<string> objects or arrays. Strings and string arrays
that are JSON payloads are also supported.
Kafka messages are passed to the function as strings and string arrays that are JSON payloads.
In a Premium plan, you must enable runtime scale monitoring for the Kafka trigger to be able to scale out to multiple instances. To learn more, see Enable runtime scaling.
For a complete set of supported host.json settings for the Kafka trigger, see host.json settings.
Connections
All connection information required by your triggers and bindings should be maintained in application settings
and not in the binding definitions in your code. This is true for credentials, which should never be stored in your
code.
IMPORTANT
Credential settings must reference an application setting. Don't hard-code credentials in your code or configuration files.
When running locally, use the local.settings.json file for your credentials, and don't publish the local.settings.json file.
Confluent
Event Hubs
When connecting to a managed Kafka cluster provided by Confluent in Azure, make sure that the following
authentication credentials for your Confluent Cloud environment are set in your trigger or binding:
The string values you use for these settings must be present as application settings in Azure or in the Values
collection in the local.settings.json file during local development.
You should also set the Protocol, AuthenticationMode, and SslCaLocation properties in your binding definitions.
Next steps
Write to an Apache Kafka stream from a function
Apache Kafka output binding for Azure Functions
The output binding allows an Azure Functions app to write messages to a Kafka topic.
IMPORTANT
Kafka bindings are only available for Functions on the Elastic Premium plan and Dedicated (App Service) plan. They are only supported on version 3.x and later versions of the Functions runtime.
Example
The usage of the binding depends on the C# modality used in your function app, which can be one of the
following:
In-process
Isolated process
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
The attributes you use depend on the specific event provider.
Confluent
Event Hubs
The following example shows a C# function that sends a single message to a Kafka topic, using data provided in an HTTP GET request.
[FunctionName("KafkaOutput")]
public static IActionResult Output(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
[Kafka("BrokerList",
"topic",
Username = "ConfluentCloudUserName",
Password = "ConfluentCloudPassword",
Protocol = BrokerProtocol.SaslSsl,
AuthenticationMode = BrokerAuthenticationMode.Plain
)] out string eventData,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
string message = req.Query["message"];
eventData = message;
return new OkObjectResult("Ok");
}
To send events in a batch, use an array of KafkaEventData objects, as shown in the following example:
[FunctionName("KafkaOutputMany")]
public static IActionResult Output(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
[Kafka("BrokerList",
"topic",
Username = "ConfluentCloudUserName",
Password = "ConfluentCloudPassword",
Protocol = BrokerProtocol.SaslSsl,
AuthenticationMode = BrokerAuthenticationMode.Plain
)] out KafkaEventData<string>[] eventDataArr,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
eventDataArr = new KafkaEventData<string>[2];
eventDataArr[0] = new KafkaEventData<string>("one");
eventDataArr[1] = new KafkaEventData<string>("two");
return new OkObjectResult("Ok");
}
The following example shows how to send an event message with headers to the same Kafka topic:
[FunctionName("KafkaOutputWithHeaders")]
public static IActionResult Output(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = null)] HttpRequest req,
[Kafka("BrokerList",
"topic",
Username = "ConfluentCloudUserName",
Password = "ConfluentCloudPassword",
Protocol = BrokerProtocol.SaslSsl,
AuthenticationMode = BrokerAuthenticationMode.Plain
)] out KafkaEventData<string> eventData,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
string message = req.Query["message"];
eventData = new KafkaEventData<string>(message);
// Headers are added as (key, byte[] value) pairs.
eventData.Headers.Add("test", System.Text.Encoding.UTF8.GetBytes("dotnet"));
return new OkObjectResult("Ok");
}
For a complete set of working .NET examples, see the Kafka extension repository.
NOTE
For an equivalent set of TypeScript examples, see the Kafka extension repository.
The specific properties of the function.json file depend on your event provider, which in these examples is either Confluent or Azure Event Hubs. The following examples show a Kafka output binding for a function that is triggered by an HTTP request and sends data from the request to the Kafka topic.
The following function.json defines the trigger for the specific provider in these examples:
Confluent
Event Hubs
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get"
]
},
{
"type": "kafka",
"name": "outputKafkaMessage",
"brokerList": "BrokerList",
"topic": "topic",
"username": "ConfluentCloudUsername",
"password": "ConfluentCloudPassword",
"protocol": "SASLSSL",
"authenticationMode": "PLAIN",
"direction": "out"
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
// This sample creates the topic "topic" and sends a message to it.
// The KafkaTrigger function will then be triggered.
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    context.bindings.outputKafkaMessage = "Message created at: " + new Date().toISOString();
    context.res = { body: 'Ok' };
};
The following code sends multiple messages as an array to the same topic:
// This sample creates the topic "topic" and sends multiple messages to it.
// The KafkaTrigger function will then be triggered.
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    context.bindings.outputKafkaMessage = ['one', 'two'];
    context.res = { body: 'Ok' };
};
The following example shows how to send an event message with headers to the same Kafka topic:
// This sample creates the topic "topic" and sends a message with a header to it.
// The KafkaTrigger function will then be triggered.
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    const message = req.query.message || 'test';
    // The payload shape mirrors the headers examples elsewhere in this article.
    context.bindings.outputKafkaMessage = JSON.stringify({
        Value: message,
        Headers: [{ Key: 'test', Value: 'javascript' }]
    });
    context.res = { body: 'Ok' };
};
For a complete set of working JavaScript examples, see the Kafka extension repository.
The specific properties of the function.json file depend on your event provider, which in these examples is either Confluent or Azure Event Hubs. The following examples show a Kafka output binding for a function that is triggered by an HTTP request and sends data from the request to the Kafka topic.
The following function.json defines the trigger for the specific provider in these examples:
Confluent
Event Hubs
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get"
]
},
{
"type": "kafka",
"name": "outputMessage",
"brokerList": "BrokerList",
"topic": "topic",
"username" : "%ConfluentCloudUserName%",
"password" : "%ConfluentCloudPassword%",
"protocol": "SASLSSL",
"authenticationMode": "PLAIN",
"direction": "out"
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
}
using namespace System.Net

param($Request, $TriggerMetadata)

$message = $Request.Query.Message
Push-OutputBinding -Name outputMessage -Value ($message)

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = 'Ok'
})
The following code sends multiple messages as an array to the same topic:
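A minimal sketch, assuming the outputMessage binding from the function.json above accepts an array of messages:

using namespace System.Net

param($Request, $TriggerMetadata)

# Each array element becomes a separate Kafka message.
$messages = @("one", "two")
Push-OutputBinding -Name outputMessage -Value ($messages)

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = 'Ok'
})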
The following example shows how to send an event message with headers to the same Kafka topic:
using namespace System.Net

param($Request, $TriggerMetadata)

$message = $Request.Query.Message

$kevent = @{
    Offset = 364
    Partition = 0
    Topic = "kafkaeventhubtest1"
    Timestamp = "2022-04-09T03:20:06.591Z"
    Value = $message
    Headers = @(@{
        Key = "test"
        Value = "powershell"
    })
}

Push-OutputBinding -Name outputMessage -Value ($kevent | ConvertTo-Json -Depth 5)

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = 'Ok'
})
For a complete set of working PowerShell examples, see the Kafka extension repository.
The specific properties of the function.json file depend on your event provider, which in these examples is either Confluent or Azure Event Hubs. The following examples show a Kafka output binding for a function that is triggered by an HTTP request and sends data from the request to the Kafka topic.
The following function.json defines the trigger for the specific provider in these examples:
Confluent
Event Hubs
{
"scriptFile": "main.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get"
]
},
{
"type": "kafka",
"direction": "out",
"name": "outputMessage",
"brokerList": "BrokerList",
"topic": "topic",
"username": "%ConfluentCloudUserName%",
"password": "%ConfluentCloudPassword%",
"protocol": "SASLSSL",
"authenticationMode": "PLAIN"
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
import logging
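A minimal sketch of the handler, assuming the outputMessage binding and the $return HTTP output defined above:

import logging
import azure.functions as func

def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpResponse:
    message = req.params.get('message', 'test')
    # Write the message to the Kafka topic via the output binding.
    outputMessage.set(message)
    return func.HttpResponse('Ok')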
The following code sends multiple messages as an array to the same topic:
import logging
import typing
from azure.functions import Out, HttpRequest, HttpResponse
import json
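A minimal sketch; it assumes the Kafka output binding accepts a JSON array string to send multiple messages:

import logging
import typing
import json
from azure.functions import Out, HttpRequest, HttpResponse

def main(req: HttpRequest, outputMessage: Out[str]) -> HttpResponse:
    message = req.params.get('message', 'test')
    # Setting the binding to a JSON array string sends one Kafka message per element (assumption).
    outputMessage.set(json.dumps([message, message]))
    return HttpResponse('Ok')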
The following example shows how to send an event message with headers to the same Kafka topic:
import logging
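A minimal sketch; the payload shape mirrors the headers examples elsewhere in this article:

import logging
import json
import azure.functions as func

def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpResponse:
    message = req.params.get('message', 'test')
    # The event shape (Value plus Headers) follows the headers samples above (assumption).
    kevent = {"Value": message, "Headers": [{"Key": "test", "Value": "python"}]}
    outputMessage.set(json.dumps(kevent))
    return func.HttpResponse('Ok')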
For a complete set of working Python examples, see the Kafka extension repository.
The annotations you use to configure the output binding depend on the specific event provider.
Confluent
Event Hubs
@FunctionName("KafkaOutput")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
@KafkaOutput(
name = "kafkaOutput",
topic = "topic",
brokerList="%BrokerList%",
username = "%ConfluentCloudUsername%",
password = "ConfluentCloudPassword",
authenticationMode = BrokerAuthenticationMode.PLAIN,
// sslCaLocation = "confluent_cloud_cacert.pem", // Enable this line for windows.
protocol = BrokerProtocol.SASLSSL
) OutputBinding<String> output,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
The following example shows how to send multiple messages to a Kafka topic.
@FunctionName("KafkaOutputMany")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
@KafkaOutput(
name = "kafkaOutput",
topic = "topic",
brokerList="%BrokerList%",
username = "%ConfluentCloudUsername%",
password = "ConfluentCloudPassword",
authenticationMode = BrokerAuthenticationMode.PLAIN,
// sslCaLocation = "confluent_cloud_cacert.pem", // Enable this line for windows.
protocol = BrokerProtocol.SASLSSL
) OutputBinding<String[]> output,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
String[] messages = new String[2];
messages[0] = "one";
messages[1] = "two";
output.setValue(messages);
return request.createResponseBuilder(HttpStatus.OK).body("Ok").build();
}
The headers examples use a KafkaEntity type along these lines:
public class KafkaEntity {
    public int Offset;
    public int Partition;
    public String Topic;
    public String Timestamp;
    public String Value;
    public KafkaHeaders[] Headers;

    public KafkaEntity(int Offset, int Partition, String Topic, String Timestamp, String Value, KafkaHeaders[] headers) {
        this.Offset = Offset;
        this.Partition = Partition;
        this.Topic = Topic;
        this.Timestamp = Timestamp;
        this.Value = Value;
        this.Headers = headers;
    }
}
The following example function sends a message with headers to a Kafka topic.
@FunctionName("KafkaOutputWithHeaders")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
@KafkaOutput(
name = "kafkaOutput",
topic = "topic",
brokerList="%BrokerList%",
username = "%ConfluentCloudUsername%",
password = "ConfluentCloudPassword",
authenticationMode = BrokerAuthenticationMode.PLAIN,
// sslCaLocation = "confluent_cloud_cacert.pem", // Enable this line for windows.
protocol = BrokerProtocol.SASLSSL
) OutputBinding<KafkaEntity> output,
final ExecutionContext context) {
context.getLogger().info("Java HTTP trigger processed a request.");
For a complete set of working Java examples for Confluent, see the Kafka extension repository.
Attributes
Both in-process and isolated process C# libraries use the Kafka attribute to define the function's output binding.
The following table explains the properties you can set using this attribute:
PARAMETER  DESCRIPTION
Annotations
The KafkaOutput annotation allows you to create a function that writes to a specific topic. Supported options
include the following elements:
ELEMENT  DESCRIPTION
name: The name of the variable that represents the brokered data in function code.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
name: The name of the variable that represents the brokered data in function code.
Usage
Both key and value types are supported with built-in Avro and Protobuf serialization.
The offset, partition, and timestamp for the event are generated at runtime. Only the value and headers can be set inside the function. The topic is set in function.json.
Make sure you have access to the Kafka topic to which you're writing. You configure the binding with access and connection credentials for that topic.
In a Premium plan, you must enable runtime scale monitoring for the Kafka output to be able to scale out to
multiple instances. To learn more, see Enable runtime scaling.
For a complete set of supported host.json settings for the Kafka trigger, see host.json settings.
Connections
All connection information required by your triggers and bindings should be maintained in application settings
and not in the binding definitions in your code. This is true for credentials, which should never be stored in your
code.
IMPORTANT
Credential settings must reference an application setting. Don't hard-code credentials in your code or configuration files.
When running locally, use the local.settings.json file for your credentials, and don't publish the local.settings.json file.
Confluent
Event Hubs
When connecting to a managed Kafka cluster provided by Confluent in Azure, make sure that the following
authentication credentials for your Confluent Cloud environment are set in your trigger or binding:
The string values you use for these settings must be present as application settings in Azure or in the Values
collection in the local.settings.json file during local development.
You should also set the Protocol, AuthenticationMode, and SslCaLocation properties in your binding definitions.
Next steps
Run a function from an Apache Kafka event stream
Azure Functions HTTP triggers and bindings overview
Azure Functions may be invoked via HTTP requests to build serverless APIs and respond to webhooks.
A C T IO N TYPE
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The functionality of the extension varies depending on the extension version:
Functions v2.x+
Functions v1.x
Add the extension to your project by installing the NuGet package, version 3.x.
Install bundle
Starting with Functions version 2.x, the HTTP extension is part of an extension bundle, which is specified in your
host.json project file. To learn more, see extension bundle.
Bundle v2.x
Functions 1.x
This version of the extension should already be available to your function app with extension bundle, version 2.x.
host.json settings
This section describes the configuration settings available for this binding in versions 2.x and higher. Settings in
the host.json file apply to all functions in a function app instance. The example host.json file below contains only
the version 2.x+ settings for this binding. For more information about function app configuration settings in
versions 2.x and later versions, see host.json reference for Azure Functions.
NOTE
For a reference of host.json in Functions 1.x, see host.json reference for Azure Functions 1.x.
{
"extensions": {
"http": {
"routePrefix": "api",
"maxOutstandingRequests": 200,
"maxConcurrentRequests": 100,
"dynamicThrottlesEnabled": true,
"hsts": {
"isEnabled": true,
"maxAge": "10"
},
"customHeaders": {
"X-Content-Type-Options": "nosniff"
}
}
}
}
Next steps
Run a function from an HTTP request
Return an HTTP response from a function
Azure Functions HTTP trigger
The HTTP trigger lets you invoke a function with an HTTP request. You can use an HTTP trigger to build
serverless APIs and respond to webhooks.
The default return value for an HTTP-triggered function is:
HTTP 204 No Content with an empty body in Functions 2.x and higher
HTTP 200 OK with an empty body in Functions 1.x
To modify the HTTP response, configure an output binding.
For more information about HTTP bindings, see the overview and output binding reference.
TIP
If you plan to use the HTTP or WebHook bindings, take steps to avoid port exhaustion that can be caused by improper instantiation of HttpClient. For more information, see How to manage connections in Azure Functions.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
The code in this article defaults to .NET Core syntax, used in Functions version 2.x and higher. For information on
the 1.x syntax, see the 1.x functions templates.
In-process
Isolated process
C# Script
The following example shows a C# function that looks for a name parameter either in the query string or the
body of the HTTP request. Notice that the return value is used for the output binding, but a return value attribute
isn't required.
[FunctionName("HttpTriggerCSharp")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
HttpRequest req, ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");

string name = req.Query["name"];

string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
name = name ?? data?.name;

return name != null
    ? (ActionResult)new OkObjectResult($"Hello, {name}")
    : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
// Item list
context.getLogger().info("GET parameters are: " + request.getQueryParameters());
// Item list
context.getLogger().info("Request body is: " + request.getBody().orElse(""));
// Item list
context.getLogger().info("Route parameters are: " + id);
@Override
public String toString() {
return "ToDoItem={id=" + id + ",description=" + description + "}";
}
}
This example reads the body of a POST request. The request body is automatically deserialized into a ToDoItem object and returned to the client with content type application/json. The ToDoItem parameter is serialized by the Functions runtime as it is assigned to the body property of the HttpMessageResponse.Builder class.
@FunctionName("TriggerPojoPost")
public HttpResponseMessage run(
@HttpTrigger(name = "req",
methods = {HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<ToDoItem>> request,
final ExecutionContext context) {
// Item list
context.getLogger().info("Request body is: " + request.getBody().orElse(null));
The following example shows a trigger binding in a function.json file and a JavaScript function that uses the
binding. The function looks for a name parameter either in the query string or the body of the HTTP request.
Here's the function.json file:
{
"disabled": false,
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req"
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
}
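A minimal sketch of the accompanying JavaScript code, based on the standard HTTP trigger template:

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    // Look for "name" in the query string, then in the request body.
    const name = (req.query.name || (req.body && req.body.name));
    context.res = {
        body: name
            ? "Hello, " + name + ". This HTTP triggered function executed successfully."
            : "Please pass a name on the query string or in the request body"
    };
};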
The following example shows a trigger binding in a function.json file and a PowerShell function. The function
looks for a name parameter either in the query string or the body of the HTTP request.
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
}
using namespace System.Net

param($Request, $TriggerMetadata)

$name = $Request.Query.Name
if (-not $name) { $name = $Request.Body.Name }

$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
if ($name) {
    $body = "Hello, $name. This HTTP triggered function executed successfully."
}

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $body
})
The following example shows a trigger binding in a function.json file and a Python function that uses the
binding. The function looks for a name parameter either in the query string or the body of the HTTP request.
Here's the function.json file:
{
"scriptFile": "__init__.py",
"disabled": false,
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req"
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
Attributes
Both in-process and isolated process C# libraries use the HttpTriggerAttribute to define the trigger binding. C#
script instead uses a function.json configuration file.
In-process
Isolated process
C# Script
Annotations
In the Java functions runtime library, use the HttpTrigger annotation, which supports the following settings:
authLevel
dataType
methods
name
route
Configuration
The following table explains the trigger configuration properties that you set in the function.json file, which
differs by runtime version.
Functions 2.x+
Functions 1.x
The following table explains the binding configuration properties that you set in the function.json file.
name: Required. The variable name used in function code for the request or request body.
Usage
This section details how to configure your HTTP trigger function binding.
The HttpTrigger annotation should be applied to a method parameter of one of the following types:
HttpRequestMessage<T>.
Any native Java types such as int, String, byte[].
Nullable values using Optional.
Any plain-old Java object (POJO) type.
Payload
The trigger input type is declared as either HttpRequest or a custom type. If you choose HttpRequest , you get
full access to the request object. For a custom type, the runtime tries to parse the JSON request body to set the
object properties.
Customize the HTTP endpoint
By default when you create a function for an HTTP trigger, the function is addressable with a route of the form:
http://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>
You can customize this route using the optional route property on the HTTP trigger's input binding. You can
use any Web API Route Constraint with your parameters.
In-process
Isolated process
C# Script
The following C# function code accepts two parameters category and id in the route and writes a response
using both parameters.
[FunctionName("Function1")]
public static IActionResult Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "products/{category:alpha}/{id:int?}")]
HttpRequest req,
string category, int? id, ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
var message = $"Category: {category}, ID: {id}";
return (ActionResult)new OkObjectResult(message);
}
Route parameters are defined using the route setting of the HttpTrigger annotation. The following function
code accepts two parameters category and id in the route and writes a response using both parameters.
package com.function;
import java.util.*;
import com.microsoft.azure.functions.annotation.*;
import com.microsoft.azure.functions.*;
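A sketch of a matching function (the class and function names here are illustrative), using the same products/{category:alpha}/{id:int?} route as the C# example:

public class HttpTriggerJava {
    @FunctionName("HttpTriggerJava")
    public HttpResponseMessage run(
            @HttpTrigger(
                name = "req",
                methods = {HttpMethod.GET},
                authLevel = AuthorizationLevel.ANONYMOUS,
                route = "products/{category:alpha}/{id:int}") HttpRequestMessage<Optional<String>> request,
            @BindingName("category") String category,
            @BindingName("id") int id,
            final ExecutionContext context) {
        // Echo both route parameters back to the caller.
        String message = String.format("Category: %s, ID: %d", category, id);
        return request.createResponseBuilder(HttpStatus.OK).body(message).build();
    }
}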
As an example, the following function.json file defines a route property for an HTTP trigger with two
parameters, category and id :
{
"bindings": [
{
"type": "httpTrigger",
"name": "req",
"direction": "in",
"methods": [ "get" ],
"route": "products/{category:alpha}/{id:int?}"
},
{
"type": "http",
"name": "res",
"direction": "out"
}
]
}
The Functions runtime provides the request body from the context object. The following example shows how
to read route parameters from context.bindingData .
module.exports = async function (context, req) {
    // Read route parameters from context.bindingData.
    const message = `Category: ${context.bindingData.category}, ID: ${context.bindingData.id}`;
    context.res = {
        body: message
    };
};
Route parameters declared in the function.json file are accessible as a property of the $Request.Params object.
$Category = $Request.Params.category
$Id = $Request.Params.id
The function execution context is exposed via a parameter declared as func.HttpRequest. This instance allows a function to access route parameters and query string values, and provides methods that let you return HTTP responses.
Once defined, the route parameters are available to the function through the route_params attribute.
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    category = req.route_params.get('category')
    id = req.route_params.get('id')
    message = f"Category: {category}, ID: {id}"
    return func.HttpResponse(message)
Using this configuration, the function is now addressable with the following route instead of the original route.
http://<APP_NAME>.azurewebsites.net/api/products/electronics/357
This configuration allows the function code to support two parameters in the address, category and id. For more
information on how route parameters are tokenized in a URL, see Routing in ASP.NET Core.
By default, all function routes are prefixed with api. You can also customize or remove the prefix using the
extensions.http.routePrefix property in your host.json file. The following example removes the api route prefix
by using an empty string for the prefix in the host.json file.
{
"extensions": {
"http": {
"routePrefix": ""
}
}
}
Route parameters can also be used in other bindings. For example, the following Table storage input binding uses the {id} route parameter in its rowKey:
{
"type": "table",
"direction": "in",
"name": "product",
"partitionKey": "products",
"tableName": "products",
"rowKey": "{id}"
}
When you use route parameters, an invoke_URL_template is automatically created for your function. Your clients
can use the URL template to understand the parameters they need to pass in the URL when calling your function
using its URL. Navigate to one of your HTTP-triggered functions in the Azure portal and select Get function
URL .
You can programmatically access the invoke_URL_template by using the Azure Resource Manager APIs for List
Functions or Get Function.
Working with client identities
If your function app is using App Service Authentication / Authorization, you can view information about
authenticated clients from your code. This information is available as request headers injected by the platform.
You can also read this information from binding data. This capability is only available to the Functions runtime in
2.x and higher. It is also currently only available for .NET languages.
Information regarding authenticated clients is available as a ClaimsPrincipal, which is available as part of the
request context as shown in the following example:
In-process
Isolated process
C# Script
using System.Net;
using Microsoft.AspNetCore.Mvc;
using System.Security.Claims;
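Continuing from the using directives above, a minimal sketch:

public static IActionResult Run(HttpRequest req, ILogger log)
{
    // The authenticated principal is exposed on the request's HttpContext.
    ClaimsPrincipal identities = req.HttpContext.User;
    // ... inspect identities.Claims as needed ...
    return new OkObjectResult("Ok");
}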
Alternatively, the ClaimsPrincipal can simply be included as an additional parameter in the function signature:
using System.Net;
using Microsoft.AspNetCore.Mvc;
using System.Security.Claims;
using Newtonsoft.Json.Linq;
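Continuing from the using directives above, a minimal sketch with the principal injected as a parameter:

public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
{
    // The runtime injects the ClaimsPrincipal for the authenticated caller.
    log.LogInformation($"Request received from {principal.Identity?.Name}");
    // ...
}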
Due to the elevated permissions in your function app granted by the master key, you should not share this key
with third parties or distribute it in native client applications. Use caution when choosing the admin access level.
Obtaining keys
Keys are stored as part of your function app in Azure and are encrypted at rest. To view your keys, create new
ones, or roll keys to new values, navigate to one of your HTTP-triggered functions in the Azure portal and select
Function Keys .
You can also manage host keys. Navigate to the function app in the Azure portal and select App keys .
You can obtain function and host keys programmatically by using the Azure Resource Manager APIs. There are
APIs to List Function Keys and List Host Keys, and when using deployment slots the equivalent APIs are List
Function Keys Slot and List Host Keys Slot.
You can also create new function and host keys programmatically by using the Create Or Update Function
Secret, Create Or Update Function Secret Slot, Create Or Update Host Secret and Create Or Update Host Secret
Slot APIs.
Function and host keys can be deleted programmatically by using the Delete Function Secret, Delete Function
Secret Slot, Delete Host Secret, and Delete Host Secret Slot APIs.
You can also use the legacy key management APIs to obtain function keys, but using the Azure Resource
Manager APIs is recommended instead.
API key authorization
Most HTTP trigger templates require an API key in the request. So your HTTP request normally looks like the
following URL:
https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?code=<API_KEY>
The key can be included in a query string variable named code, as above. It can also be included in an x-functions-key HTTP header. The value of the key can be any function key defined for the function, or any host key.
You can allow anonymous requests, which do not require keys. You can also require that the master key is used.
You change the default authorization level by using the authLevel property in the binding JSON. For more
information, see Trigger - configuration.
NOTE
When running functions locally, authorization is disabled regardless of the specified authorization level setting. After
publishing to Azure, the authLevel setting in your trigger is enforced. Keys are still required when running locally in a
container.
Azure App Service Environment (ASE) provides a dedicated hosting environment in which to run your functions.
ASE lets you configure a single front-end gateway that you can use to authenticate all incoming requests. For
more information, see Configuring a Web Application Firewall (WAF) for App Service Environment.
Webhooks
NOTE
Webhook mode is only available for version 1.x of the Functions runtime. This change was made to improve the
performance of HTTP triggers in version 2.x and higher.
In version 1.x, webhook templates provide additional validation for webhook payloads. In version 2.x and higher,
the base HTTP trigger still works and is the recommended approach for webhooks.
WebHook type
The webHookType binding property indicates the type of webhook supported by the function, which also dictates the supported payload. The webhook type can be one of the following values:
TYPE VALUE  DESCRIPTION
When setting the webHookType property, don't also set the methods property on the binding.
GitHub webhooks
To respond to GitHub webhooks, first create your function with an HTTP Trigger, and set the webHookType
property to github . Then copy its URL and API key into the Add webhook page of your GitHub repository.
Slack webhooks
The Slack webhook generates a token for you instead of letting you specify it, so you must configure a function-
specific key with the token from Slack. See Authorization keys.
Webhooks and keys
Webhook authorization is handled by the webhook receiver component, part of the HTTP trigger, and the
mechanism varies based on the webhook type. Each mechanism does rely on a key. By default, the function key
named "default" is used. To use a different key, configure the webhook provider to send the key name with the
request in one of the following ways:
Query string: The provider passes the key name in the clientid query string parameter, such as
https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?clientid=<KEY_NAME> .
Request header: The provider passes the key name in the x-functions-clientid header.
Content types
Passing binary and form data to a non-C# function requires that you use the appropriate content-type header.
Supported content types include octet-stream for binary data and multipart types.
Known issues
In non-C# functions, requests sent with the content-type image/jpeg result in a string value passed to the function. In cases like these, you can manually convert the string value to a byte array to access the raw binary data.
Limits
The HTTP request length is limited to 100 MB (104,857,600 bytes), and the URL length is limited to 4 KB (4,096
bytes). These limits are specified by the httpRuntime element of the runtime's Web.config file.
If a function that uses the HTTP trigger doesn't complete within 230 seconds, the Azure Load Balancer will time
out and return an HTTP 502 error. The function will continue running but will be unable to return an HTTP
response. For long-running functions, we recommend that you follow async patterns and return a location
where you can ping the status of the request. For information about how long a function can run, see Scale and
hosting - Consumption plan.
Next steps
Return an HTTP response from a function
Azure Functions HTTP output bindings
Use the HTTP output binding to respond to the HTTP request sender (HTTP trigger). This binding requires an
HTTP trigger and allows you to customize the response associated with the trigger's request.
The default return value for an HTTP-triggered function is:
HTTP 204 No Content with an empty body in Functions 2.x and higher
HTTP 200 OK with an empty body in Functions 1.x
Attribute
Neither in-process nor isolated process C# libraries require an attribute. C# script instead uses a function.json configuration file.
In-process
Isolated process
C# Script
Annotations
In the Java functions runtime library, use the HttpOutput annotation to define an output variable other than the
default variable returned by the function. This annotation supports the following settings:
dataType
name
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
name: The variable name used in function code for the response, or $return to use the return value.
Usage
To send an HTTP response, use the language-standard response patterns.
The response type depends on the C# mode:
In-process
Isolated process
C# Script
Next steps
Run a function from an HTTP request
Mobile Apps bindings for Azure Functions
NOTE
Azure Mobile Apps bindings are only available to Azure Functions 1.x. They are not supported in Azure Functions 2.x and
higher.
This article explains how to work with Azure Mobile Apps bindings in Azure Functions. Azure Functions supports
input and output bindings for Mobile Apps.
The Mobile Apps bindings let you read and update data tables in mobile apps.
DEVELOPMENT ENVIRONMENT  TO ADD SUPPORT IN FUNCTIONS 1.X
Input
The Mobile Apps input binding loads a record from a mobile table endpoint and passes it into your function. In
C# and F# functions, any changes made to the record are automatically sent back to the table when the function
exits successfully.
Input - example
See the language-specific example:
C# script
JavaScript
The following example shows a Mobile Apps input binding in a function.json file and a C# script function that
uses the binding. The function is triggered by a queue message that has a record identifier. The function reads
the specified record and modifies its Text property.
Here's the binding data in the function.json file:
{
"bindings": [
{
"name": "myQueueItem",
"queueName": "myqueue-items",
"connection": "",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "record",
"type": "mobileTable",
"tableName": "MyTable",
"id": "{queueTrigger}",
"connection": "My_MobileApp_Url",
"apiKey": "My_MobileApp_Key",
"direction": "in"
}
]
}
#r "Newtonsoft.Json"
using Newtonsoft.Json.Linq;
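Continuing from the directives above, a minimal sketch of the C# script code (record matches the binding name in the function.json):

public static void Run(string myQueueItem, JObject record)
{
    // The input binding loads the record whose ID matches the queue message.
    if (record != null)
    {
        record["Text"] = "This has changed.";
        // Changes are written back to the table when the function exits successfully.
    }
}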
Input - attributes
In C# class libraries, use the MobileTable attribute.
For information about attribute properties that you can configure, see the following configuration section.
Input - configuration
The following table explains the binding configuration properties that you set in the function.json file and the
MobileTable attribute.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
IMPORTANT
Don't share the API key with your mobile app clients. It should only be distributed securely to service-side clients, like
Azure Functions. Azure Functions stores your connection information and API keys as app settings so that they are not
checked into your source control repository. This safeguards your sensitive information.
Input - usage
In C# functions, when the record with the specified ID is found, it is passed into the named JObject parameter. When the record is not found, the parameter value is null.
In JavaScript functions, the record is passed into the context.bindings.<name> object. When the record is not found, the parameter value is null.
In C# and F# functions, any changes you make to the input record (input parameter) are automatically sent back
to the table when the function exits successfully. You can't modify a record in JavaScript functions.
Output
Use the Mobile Apps output binding to write a new record to a Mobile Apps table.
Output - example
C#
C# script
JavaScript
The following example shows a C# function that is triggered by a queue message and creates a record in a
mobile app table.
[FunctionName("MobileAppsOutput")]
[return: MobileTable(ApiKeySetting = "MyMobileAppKey", TableName = "MyTable", MobileAppUriSetting =
"MyMobileAppUri")]
public static object Run(
[QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
TraceWriter log)
{
return new { Text = $"I'm running in a C# function! {myQueueItem}" };
}
Output - attributes
In C# class libraries, use the MobileTable attribute.
For information about attribute properties that you can configure, see Output - configuration. Here's a
MobileTable attribute example in a method signature:
C#
[FunctionName("MobileAppsOutput")]
[return: MobileTable(ApiKeySetting = "MyMobileAppKey", TableName = "MyTable", MobileAppUriSetting =
"MyMobileAppUri")]
public static object Run(
[QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
TraceWriter log)
{
...
}
Output - configuration
The following table explains the binding configuration properties that you set in the function.json file and the
MobileTable attribute.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
IMPORTANT
Don't share the API key with your mobile app clients. It should only be distributed securely to service-side clients, like
Azure Functions. Azure Functions stores your connection information and API keys as app settings so that they are not
checked into your source control repository. This safeguards your sensitive information.
Output - usage
In C# script functions, use a named output parameter of type out object to access the output record. In C# class libraries, the MobileTable attribute can be used with any of the following types:
ICollector<T> or IAsyncCollector<T>, where T is either JObject or any type with a public string Id property.
out JObject
out T or out T[], where T is any type with a public string Id property.
In Node.js functions, use context.bindings.<name> to access the output record.
Next steps
Learn more about Azure Functions triggers and bindings
Notification Hubs output binding for Azure Functions
This article explains how to send push notifications by using Azure Notification Hubs bindings in Azure
Functions. Azure Functions supports output bindings for Notification Hubs.
Azure Notification Hubs must be configured for the Platform Notifications Service (PNS) you want to use. To
learn how to get push notifications in your client app from Notification Hubs, see Getting started with
Notification Hubs and select your target client platform from the drop-down list near the top of the page.
IMPORTANT
Google has deprecated Google Cloud Messaging (GCM) in favor of Firebase Cloud Messaging (FCM). This output binding
doesn't support FCM. To send notifications using FCM, use the Firebase API directly in your function or use template
notifications.
DEVELOPMENT ENVIRONMENT  TO ADD SUPPORT IN FUNCTIONS 1.X
Example - template
The notifications you send can be native notifications or template notifications. Native notifications target a
specific client platform as configured in the platform property of the output binding. A template notification
can be used to target multiple platforms.
See the language-specific example:
C# script - out parameter
C# script - asynchronous
C# script - JSON
C# script - library types
F#
JavaScript
C# script template example - out parameter
This example sends a notification for a template registration that contains a message placeholder in the
template.
using System;
using System.Threading.Tasks;
using System.Collections.Generic;
public static void Run(string myQueueItem, out IDictionary<string, string> notification, TraceWriter log)
{
log.Info($"C# Queue trigger function processed: {myQueueItem}");
notification = GetTemplateProperties(myQueueItem);
}
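The GetTemplateProperties helper used above can be defined along these lines (a sketch):

private static IDictionary<string, string> GetTemplateProperties(string message)
{
    // Map the queue message to the "message" placeholder in the registered template.
    var templateProperties = new Dictionary<string, string>();
    templateProperties["message"] = message;
    return templateProperties;
}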
C# script template example - asynchronous
using System;
using System.Threading.Tasks;
using System.Collections.Generic;
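A minimal sketch of the asynchronous variant, assuming the same GetTemplateProperties helper:

public static async Task Run(string myQueueItem, IAsyncCollector<IDictionary<string, string>> notification, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
    // Send the template notification asynchronously via the collector.
    await notification.AddAsync(GetTemplateProperties(myQueueItem));
}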
C# script template example - JSON
This example sends a notification for a template registration that contains a message placeholder in the template, using a valid JSON string.
using System;
public static void Run(string myQueueItem, out string notification, TraceWriter log)
{
log.Info($"C# Queue trigger function processed: {myQueueItem}");
notification = "{\"message\":\"Hello from C#. Processed a queue item!\"}";
}
C# script template example - library types
This example shows how to use types defined in the Microsoft Azure Notification Hubs Library.
#r "Microsoft.Azure.NotificationHubs"
using System;
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;
public static void Run(string myQueueItem, out Notification notification, TraceWriter log)
{
log.Info($"C# Queue trigger function processed: {myQueueItem}");
notification = GetTemplateNotification(myQueueItem);
}
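The GetTemplateNotification helper used above can be defined along these lines (a sketch; TemplateNotification comes from Microsoft.Azure.NotificationHubs):

private static TemplateNotification GetTemplateNotification(string message)
{
    // Map the queue message to the "message" placeholder in the registered template.
    var templateProperties = new Dictionary<string, string>();
    templateProperties["message"] = message;
    return new TemplateNotification(templateProperties);
}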
F# template example
This example sends a notification for a template registration that contains location and message .
JavaScript template example
This example sends a notification for a template registration that contains location and message:
module.exports = function (context, myTimer) {
    var timeStamp = new Date().toISOString();

    if (myTimer.IsPastDue) {
        context.log('Node.js is running late!');
    }
    context.log('Node.js timer trigger function ran!', timeStamp);
    context.bindings.notification = {
        location: "Redmond",
        message: "Hello from Node!"
    };
    context.done();
};
Example - APNS native
using System;
using Microsoft.Azure.NotificationHubs;
using Newtonsoft.Json;
// In this example the queue item is a new user to be processed in the form of a JSON string with
// a "name" value.
//
// The JSON format for a native APNS notification is ...
// { "aps": { "alert": "notification message" }}
Example - WNS native
using System;
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;
using Newtonsoft.Json;

// In this example the queue item is a new user to be processed in the form of a JSON string with
// a "name" value.
//
// The XML format for a native WNS toast notification is ...
// <?xml version="1.0" encoding="utf-8"?>
// <toast>
//   <visual>
//     <binding template="ToastText01">
//       <text id="1">notification message</text>
//     </binding>
//   </visual>
// </toast>

public static async Task Run(string myQueueItem, IAsyncCollector<Notification> notification, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");

    // Construct the toast payload from the queued user's name (assumed shape; see comments above).
    dynamic user = JsonConvert.DeserializeObject(myQueueItem);
    string wnsNotificationPayload = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
        "<toast><visual><binding template=\"ToastText01\">" +
        "<text id=\"1\">A new user wants to be added (" + user.name + ")</text>" +
        "</binding></visual></toast>";

    log.Info($"{wnsNotificationPayload}");
    await notification.AddAsync(new WindowsNotification(wnsNotificationPayload));
}
Attributes
In C# class libraries, use the NotificationHub attribute.
The attribute's constructor parameters and properties are described in the configuration section.
Configuration
The following table explains the binding configuration properties that you set in the function.json file and the
NotificationHub attribute:
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
function.json file example
Here's an example of a Notification Hubs binding in a function.json file.
{
"bindings": [
{
"type": "notificationHub",
"direction": "out",
"name": "notification",
"tagExpression": "",
"hubName": "my-notification-hub",
"connection": "MyHubConnectionString",
"platform": "apns"
}
],
"disabled": false
}
1. In the Azure portal, open your notification hub's Access Policies and copy the DefaultFullSharedAccessSignature connection string.
2. Navigate to your function app in the Azure portal, choose Application settings , add a key such as MyHubConnectionString , paste the copied DefaultFullSharedAccessSignature for your notification hub as the value, and then click Save .
The name of this application setting is what goes in the output binding connection setting in function.json or the
.NET attribute. See the Configuration section earlier in this article.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Next steps
Learn more about Azure Functions triggers and bindings
Azure Queue storage trigger and bindings for Azure Functions overview
Azure Functions can run as new Azure Queue storage messages are created and can write queue messages
within a function.
ACTION  TYPE
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The functionality of the extension varies depending on the extension version:
Extension 5.x+
Functions 2.x+
Functions 1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
This version allows you to bind to types from Azure.Storage.Queues.
This extension is available by installing the Microsoft.Azure.WebJobs.Extensions.Storage.Queues NuGet package,
version 5.x.
Using the .NET CLI:
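For example (the version number here is illustrative; use the latest 5.x release):

dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Queues --version 5.0.0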
Install bundle
The Queue storage binding is part of an extension bundle, which is specified in your host.json project file. You may
need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn
more, see extension bundle.
Bundle v3.x
Bundle v2.x
Functions 1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
You can add this version of the extension from the preview extension bundle v3 by adding or replacing the
following code in your host.json file:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
NOTE
Version 3.x of the extension bundle currently does not include the Table Storage bindings. If your app requires Table
Storage, you will need to continue using the 2.x version for now.
host.json settings
This section describes the configuration settings available for this binding in versions 2.x and higher. Settings in
the host.json file apply to all functions in a function app instance. The example host.json file below contains only
the version 2.x+ settings for this binding. For more information about function app configuration settings in
versions 2.x and later versions, see host.json reference for Azure Functions.
NOTE
For a reference of host.json in Functions 1.x, see host.json reference for Azure Functions 1.x.
{
"version": "2.0",
"extensions": {
"queues": {
"maxPollingInterval": "00:00:02",
"visibilityTimeout" : "00:00:30",
"batchSize": 16,
"maxDequeueCount": 5,
"newBatchThreshold": 8,
"messageEncoding": "base64"
}
}
}
Next steps
Run a function as queue storage data changes (Trigger)
Write queue storage messages (Output binding)
Azure Queue storage trigger for Azure Functions
8/2/2022 • 17 minutes to read • Edit Online
The queue storage trigger runs a function as messages are added to Azure Queue storage.
Example
Use the queue trigger to start a function when a new item is received on a queue. The queue message is
provided as input to the function.
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that polls the myqueue-items queue and writes a log each time a
queue item is processed.
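A minimal sketch of such a function (it mirrors the attribute examples later in this article):

[FunctionName("QueueTrigger")]
public static void Run(
    [QueueTrigger("myqueue-items", Connection = "StorageConnectionAppSetting")] string myQueueItem,
    ILogger log)
{
    // The queue message payload is provided as the myQueueItem parameter.
    log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
}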
The following Java example shows a storage queue trigger function, which logs the triggered message placed
into queue myqueuename .
@FunctionName("queueprocessor")
public void run(
@QueueTrigger(name = "msg",
queueName = "myqueuename",
connection = "myconnvarname") String message,
final ExecutionContext context
) {
context.getLogger().info(message);
}
The following example shows a queue trigger binding in a function.json file and a JavaScript function that uses
the binding. The function polls the myqueue-items queue and writes a log each time a queue item is processed.
Here's the function.json file:
{
"disabled": false,
"bindings": [
{
"type": "queueTrigger",
"direction": "in",
"name": "myQueueItem",
"queueName": "myqueue-items",
"connection":"MyStorageConnectionAppSetting"
}
]
}
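A minimal sketch of the accompanying JavaScript code:

module.exports = async function (context, myQueueItem) {
    // myQueueItem carries the queue message payload, per the "name" binding property.
    context.log('JavaScript queue trigger function processed work item', myQueueItem);
};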
NOTE
The name parameter is reflected as context.bindings.<name> in the JavaScript code and contains the queue item payload. This payload is also passed as the second parameter to the function.
The usage section explains myQueueItem , which is named by the name property in function.json. The message
metadata section explains all of the other variables shown.
The following example demonstrates how to read a queue message passed to a function via a trigger.
A Storage queue trigger is defined in function.json file where type is set to queueTrigger .
{
"bindings": [
{
"name": "QueueItem",
"type": "queueTrigger",
"direction": "in",
"queueName": "messages",
"connection": "MyStorageConnectionAppSetting"
}
]
}
The code in the Run.ps1 file declares a parameter as $QueueItem , which allows you to read the queue message
in your function.
# Input bindings are passed in via param block.
param([string] $QueueItem, $TriggerMetadata)
# Write out the queue message and metadata to the information log.
Write-Host "PowerShell queue trigger function processed work item: $QueueItem"
Write-Host "Queue item expiration time: $($TriggerMetadata.ExpirationTime)"
Write-Host "Queue item insertion time: $($TriggerMetadata.InsertionTime)"
Write-Host "Queue item next visible time: $($TriggerMetadata.NextVisibleTime)"
Write-Host "ID: $($TriggerMetadata.Id)"
Write-Host "Pop receipt: $($TriggerMetadata.PopReceipt)"
Write-Host "Dequeue count: $($TriggerMetadata.DequeueCount)"
The following example demonstrates how to read a queue message passed to a function via a trigger.
A Storage queue trigger is defined in function.json where type is set to queueTrigger .
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "msg",
"type": "queueTrigger",
"direction": "in",
"queueName": "messages",
"connection": "AzureStorageQueuesConnectionString"
}
]
}
The code in __init__.py declares a parameter as func.QueueMessage, which allows you to read the queue message in your function.
import logging
import json

import azure.functions as func

def main(msg: func.QueueMessage):
    result = json.dumps({
        'id': msg.id,
        'body': msg.get_body().decode('utf-8'),
        'expiration_time': (msg.expiration_time.isoformat()
                            if msg.expiration_time else None),
        'insertion_time': (msg.insertion_time.isoformat()
                           if msg.insertion_time else None),
        'time_next_visible': (msg.time_next_visible.isoformat()
                              if msg.time_next_visible else None),
        'pop_receipt': msg.pop_receipt,
        'dequeue_count': msg.dequeue_count
    })

    logging.info(result)
Attributes
Both in-process and isolated process C# libraries use the QueueTriggerAttribute to define the function. C# script
instead uses a function.json configuration file.
In-process
Isolated process
C# script
In C# class libraries, the attribute's constructor takes the name of the queue to monitor, as shown in the
following example:
[FunctionName("QueueTrigger")]
public static void Run(
[QueueTrigger("myqueue-items")] string myQueueItem,
ILogger log)
{
...
}
You can set the Connection property to specify the app setting that contains the storage account connection
string to use, as shown in the following example:
[FunctionName("QueueTrigger")]
public static void Run(
[QueueTrigger("myqueue-items", Connection = "StorageConnectionAppSetting")] string myQueueItem,
ILogger log)
{
....
}
Annotations
The QueueTrigger annotation gives you access to the queue that triggers the function. The following example
makes the queue message available to the function via the message parameter.
package com.function;
import com.microsoft.azure.functions.annotation.*;
import com.microsoft.azure.functions.*;
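// The remainder of this example is a minimal sketch; the class, queue, and
// connection-setting names below are illustrative assumptions.
public class QueueTriggerDemo {
    @FunctionName("QueueTriggerDemo")
    public void run(
        @QueueTrigger(name = "message", queueName = "messages", connection = "MyStorageConnectionAppSetting") String message,
        final ExecutionContext context
    ) {
        context.getLogger().info("Queue message: " + message);
    }
}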
The name property of the trigger configuration specifies the name of the variable that contains the queue item payload in the function code.
Usage
NOTE
Functions expects a base64-encoded string. Any adjustments to the encoding type (in order to prepare data as a base64-encoded string) need to be implemented in the calling service.
The usage of the Queue trigger depends on the extension package version and the C# modality used in your function app. An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
Choose a version to see usage details for the mode and version.
Extension 5.x+
Extension 2.x+
Access the message data by using a method parameter such as string paramName . The paramName is the value
specified in the QueueTriggerAttribute. You can bind to any of the following types:
Plain-old CLR object (POCO)
string
byte[]
QueueMessage
When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an
arbitrary class defined in your code. For examples using QueueMessage, see the GitHub repository for the
extension.
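For example, a trigger parameter can be declared as a custom type. The following is a minimal sketch; the Order class, queue name, and connection setting are illustrative, not part of the extension's API:

public class Order
{
    public string OrderId { get; set; }
    public int Quantity { get; set; }
}

[FunctionName("QueueTriggerPoco")]
public static void Run(
    [QueueTrigger("myqueue-items", Connection = "StorageConnectionAppSetting")] Order order,
    ILogger log)
{
    // The runtime deserializes the JSON queue payload into an Order instance.
    log.LogInformation($"Order {order.OrderId}, quantity {order.Quantity}");
}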
While the attribute takes a Connection property, you can also use the StorageAccountAttribute to specify a
storage account connection. You can do this when you need to use a different storage account than other
functions in the library. The constructor takes the name of an app setting that contains a storage connection
string. The attribute can be applied at the parameter, method, or class level. The following example shows class
level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("StorageTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
...
}
Metadata
The queue trigger provides several metadata properties. These properties can be used as part of binding
expressions in other bindings or as parameters in your code.
The properties are members of the CloudQueueMessage class.
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Azure Queues. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string, follow the steps shown at Manage storage account access keys.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For
example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyStorage." If you leave connection empty, the Functions runtime uses the default Storage
connection string in the app setting that is named AzureWebJobsStorage .
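For example, a local.settings.json for local development might look like the following sketch; the setting name is illustrative:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "MyStorageConnectionAppSetting": "<storage-connection-string>"
  }
}

Setting connection to "MyStorageConnectionAppSetting" in the binding makes the runtime read that setting directly.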
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
If you are setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all
other connections, the extension requires the following properties:
PROPERTY | ENVIRONMENT VARIABLE TEMPLATE | DESCRIPTION | EXAMPLE VALUE
Queue service URI | <CONNECTION_NAME_PREFIX>__queueServiceUri (1) | The data plane URI of the queue service to which you're connecting, using the HTTPS scheme. | https://<storage_account_name>.queue.core.windows.net
(1) <CONNECTION_NAME_PREFIX>__serviceUri can be used as an alias. If both forms are provided, the queueServiceUri form is used. The serviceUri form cannot be used when the overall connection configuration is to be used across blobs, queues, and/or tables.
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientId properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
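For example, a user-assigned identity for a connection named "MyQueueConn" might be configured with application settings like the following sketch (the prefix and placeholder values are illustrative):

MyQueueConn__queueServiceUri = https://<storage_account_name>.queue.core.windows.net
MyQueueConn__credential = managedidentity
MyQueueConn__clientId = <client_id_of_user_assigned_identity>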
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege , granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your queue at runtime. Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using the Queue Storage extension in normal operation. Your application may require additional permissions based on the code you write.
BINDING TYPE | EXAMPLE BUILT-IN ROLES
Trigger | Storage Queue Data Reader, Storage Queue Data Message Processor
Output binding | Storage Queue Data Contributor, Storage Queue Data Message Sender
Poison messages
When a queue trigger function fails, Azure Functions retries the function up to five times for a given queue
message, including the first try. If all five attempts fail, the Functions runtime adds a message to a queue named
<originalqueuename>-poison. You can write a function to process messages from the poison queue by logging
them or sending a notification that manual attention is needed.
To handle poison messages manually, check the dequeueCount of the queue message.
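For example, a companion function can drain the poison queue. The following is a minimal sketch; the queue and setting names are illustrative:

[FunctionName("ProcessPoisonMessages")]
public static void Run(
    [QueueTrigger("myqueue-items-poison", Connection = "StorageConnectionAppSetting")] string poisonMessage,
    ILogger log)
{
    // Messages arrive here after five failed processing attempts on myqueue-items.
    log.LogWarning($"Poison message needs manual attention: {poisonMessage}");
}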
Peek lock
The peek-lock pattern happens automatically for queue triggers. As messages are dequeued, they are marked as
invisible and associated with a timeout managed by the Storage service.
When the function starts processing a message, the following outcomes are possible:
If the function is successful, then the function execution completes and the message is deleted.
If the function fails, then the message visibility is reset. After being reset, the message is reprocessed the next
time the function requests a new message.
If the function never completes due to a crash, the message visibility expires and the message re-appears in
the queue.
All of the visibility mechanics are handled by the Storage service, not the Functions runtime.
Polling algorithm
The queue trigger implements a random exponential back-off algorithm to reduce the effect of idle-queue
polling on storage transaction costs.
The algorithm uses the following logic:
When a message is found, the runtime waits 100 milliseconds and then checks for another message
When no message is found, it waits about 200 milliseconds before trying again.
After subsequent failed attempts to get a queue message, the wait time continues to increase until it reaches
the maximum wait time, which defaults to one minute.
The maximum wait time is configurable via the maxPollingInterval property in the host.json file.
For local development the maximum polling interval defaults to two seconds.
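For example, a host.json along these lines would cap the polling back-off at 30 seconds (a sketch showing only the relevant setting):

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:30"
    }
  }
}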
NOTE
With regard to billing, when hosting function apps in the Consumption plan you are not charged for time spent polling by the runtime.
Concurrency
When there are multiple queue messages waiting, the queue trigger retrieves a batch of messages and invokes
function instances concurrently to process them. By default, the batch size is 16. When the number being
processed gets down to 8, the runtime gets another batch and starts processing those messages. So the
maximum number of concurrent messages being processed per function on one virtual machine (VM) is 24.
This limit applies separately to each queue-triggered function on each VM. If your function app scales out to
multiple VMs, each VM will wait for triggers and attempt to run functions. For example, if a function app scales
out to 3 VMs, the default maximum number of concurrent instances of one queue-triggered function is 72.
The batch size and the threshold for getting a new batch are configurable in the host.json file. If you want to
minimize parallel execution for queue-triggered functions in a function app, you can set the batch size to 1. This
setting eliminates concurrency only so long as your function app runs on a single virtual machine (VM).
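For example, the default values described above correspond to a host.json like the following (a sketch showing only the relevant settings):

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 16,
      "newBatchThreshold": 8
    }
  }
}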
The queue trigger automatically prevents a function from processing a queue message multiple times
simultaneously.
host.json properties
The host.json file contains settings that control queue trigger behavior. See the host.json settings section for
details regarding available settings.
Next steps
Write queue storage messages (Output binding)
Azure Queue storage output bindings for Azure Functions
Azure Functions can create new Azure Queue storage messages by setting up an output binding.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that creates a queue message for each HTTP request received.
[StorageAccount("MyStorageConnectionAppSetting")]
public static class QueueFunctions
{
[FunctionName("QueueOutput")]
[return: Queue("myqueue-items")]
public static string QueueOutput([HttpTrigger] dynamic input, ILogger log)
{
log.LogInformation($"C# function processed: {input.Text}");
return input.Text;
}
}
The following example shows a Java function that creates a queue message when triggered by an HTTP request.
@FunctionName("httpToQueue")
@QueueOutput(name = "item", queueName = "myqueue-items", connection = "MyStorageConnectionAppSetting")
public String pushToQueue(
@HttpTrigger(name = "request", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS)
final String message,
@HttpOutput(name = "response") final OutputBinding<String> result) {
result.setValue(message + " has been added.");
return message;
}
In the Java functions runtime library, use the @QueueOutput annotation on parameters whose value would be
written to Queue storage. The parameter type should be OutputBinding<T>, where T is any native Java type or a POJO.
The following example shows an HTTP trigger binding in a function.json file and a JavaScript function that uses
the binding. The function creates a queue item for each HTTP request received.
Here's the function.json file:
{
"bindings": [
{
"type": "httpTrigger",
"direction": "in",
"authLevel": "function",
"name": "input"
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "queue",
"direction": "out",
"name": "myQueueItem",
"queueName": "outqueue",
"connection": "MyStorageConnectionAppSetting"
}
]
}
You can send multiple messages at once by defining a message array for the myQueueItem output binding. The
following JavaScript code sends two queue messages with hard-coded values for each HTTP request received.
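A minimal sketch of such a function, using the myQueueItem binding defined above, might look like this:

module.exports = async function (context, input) {
    // Assigning an array writes one queue message per element.
    context.bindings.myQueueItem = ["message 1", "message 2"];
};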
The following code examples demonstrate how to output a queue message from an HTTP-triggered function.
A configuration entry with type set to queue defines the output binding.
{
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
},
{
"type": "queue",
"direction": "out",
"name": "Msg",
"queueName": "outqueue",
"connection": "MyStorageConnectionAppSetting"
}
]
}
Using this binding configuration, a PowerShell function can create a queue message using Push-OutputBinding .
In this example, a message is created from a query string or body parameter.
To send multiple messages at once, define a message array and use Push-OutputBinding to send messages to
the Queue output binding.
using namespace System.Net
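# Sketch (assumed shape): read the message from a query string or body parameter,
# then write it to the queue via the Msg binding from the function.json above.
param($Request, $TriggerMetadata)

$message = $Request.Query.Message
if (-not $message) { $message = $Request.Body.Message }

# Write a single message to the Queue output binding.
Push-OutputBinding -Name Msg -Value $message

# Respond to the HTTP caller.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = $message
})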
The following example demonstrates how to output single and multiple values to storage queues. The
configuration needed for function.json is the same either way.
A Storage queue binding is defined in function.json where type is set to queue .
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureStorageQueuesConnectionString"
}
]
}
To set an individual message on the queue, you pass a single value to the set method.
def main(req: func.HttpRequest, msg: func.Out[str]) -> str:
    input_msg = req.params.get('message')
    msg.set(input_msg)
    return 'OK'
To create multiple messages on the queue, declare a parameter as the appropriate list type and pass an array of
values (that match the list type) to the set method.
def main(req: func.HttpRequest, msg: func.Out[typing.List[str]]) -> str:
    msg.set(['one', 'two'])
    return 'OK'
Attributes
The attribute that defines an output binding in C# libraries depends on the mode in which the C# class library
runs. C# script instead uses a function.json configuration file.
In-process
Isolated process
C# script
[FunctionName("QueueOutput")]
[return: Queue("myqueue-items")]
public static string Run([HttpTrigger] dynamic input, ILogger log)
{
...
}
You can set the Connection property to specify the storage account to use, as shown in the following example:
[FunctionName("QueueOutput")]
[return: Queue("myqueue-items", Connection = "StorageConnectionAppSetting")]
public static string Run([HttpTrigger] dynamic input, ILogger log)
{
...
}
You can use the StorageAccount attribute to specify the storage account at class, method, or parameter level. For
more information, see Trigger - attributes.
Annotations
The QueueOutput annotation allows you to write a message as the output of a function. The following example
shows an HTTP-triggered function that creates a queue message.
package com.function;
import java.util.*;
import com.microsoft.azure.functions.annotation.*;
import com.microsoft.azure.functions.*;

public class HttpTriggerQueueOutput {
    @FunctionName("HttpTriggerQueueOutput")
    public HttpResponseMessage run(
            @HttpTrigger(name = "request", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION) HttpRequestMessage<Optional<String>> request,
            @QueueOutput(name = "message", queueName = "myqueue-items", connection = "MyStorageConnectionAppSetting") OutputBinding<String> message,
            final ExecutionContext context) {
        message.setValue(request.getQueryParameters().get("name"));
        return request.createResponseBuilder(HttpStatus.OK).body("Done").build();
    }
}
The parameter associated with the QueueOutput annotation is typed as an OutputBinding<T> instance.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
See the Example section for complete examples.
Usage
The usage of the Queue output binding depends on the extension package version and the C# modality used in
your function app, which can be one of the following:
In-process
Isolated process
C# script
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
Choose a version to see usage details for the mode and version.
Extension 5.x+
Extension 2.x+
Write a single queue message by using a method parameter such as out T paramName . You can use the method
return type instead of an out parameter, and T can be any of the following types:
An object serializable as JSON
string
byte[]
QueueMessage
For examples using these types, see the GitHub repository for the extension.
You can write multiple messages to the queue by using one of the following types:
ICollector<T> or IAsyncCollector<T>
QueueClient
For examples using QueueMessage and QueueClient, see the GitHub repository for the extension.
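For example, an HTTP-triggered function can queue several messages through an ICollector (a minimal sketch; the names are illustrative):

[FunctionName("QueueOutputMultiple")]
public static void Run(
    [HttpTrigger] dynamic input,
    [Queue("myqueue-items")] ICollector<string> queueCollector,
    ILogger log)
{
    // Each Add call enqueues one message.
    queueCollector.Add("first message");
    queueCollector.Add("second message");
}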
While the attribute takes a Connection property, you can also use the StorageAccountAttribute to specify a
storage account connection. You can do this when you need to use a different storage account than other
functions in the library. The constructor takes the name of an app setting that contains a storage connection
string. The attribute can be applied at the parameter, method, or class level. The following example shows class
level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("StorageTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
...
}
The output queue item is available via context.bindings.<NAME> where <NAME> matches the name defined in
function.json. You can use a string or a JSON-serializable object for the queue item payload.
Output to the queue message is available via Push-OutputBinding, where you pass arguments that match the name designated by the binding's name parameter in the function.json file.
There are two options for writing from your function to the configured queue:
Return value: Set the name property in function.json to $return. With this configuration, the function's return value is persisted as a Queue storage message.
Imperative: Pass a value to the set method of the parameter declared as an Out type. The value passed to set is persisted as a Queue storage message.
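For example, the return-value approach looks like the following sketch, which assumes a function.json whose queue output binding name is set to $return:

import azure.functions as func

def main(req: func.HttpRequest) -> str:
    # The return value is persisted as the queue message.
    return req.params.get('message', 'hello')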
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Azure Queues. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string, follow the steps shown at Manage storage account access keys.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For
example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyStorage." If you leave connection empty, the Functions runtime uses the default Storage
connection string in the app setting that is named AzureWebJobsStorage .
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
If you are setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all
other connections, the extension requires the following properties:
PROPERTY | ENVIRONMENT VARIABLE TEMPLATE | DESCRIPTION | EXAMPLE VALUE
Queue service URI | <CONNECTION_NAME_PREFIX>__queueServiceUri (1) | The data plane URI of the queue service to which you're connecting, using the HTTPS scheme. | https://<storage_account_name>.queue.core.windows.net
(1) <CONNECTION_NAME_PREFIX>__serviceUri can be used as an alias. If both forms are provided, the queueServiceUri form is used. The serviceUri form cannot be used when the overall connection configuration is to be used across blobs, queues, and/or tables.
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientId properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege , granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your queue at runtime. Management roles like Owner are not sufficient. The following table shows built-in roles that are recommended when using the Queue Storage extension in normal operation. Your application may require additional permissions based on the code you write.
BINDING TYPE | EXAMPLE BUILT-IN ROLES
Trigger | Storage Queue Data Reader, Storage Queue Data Message Processor
Output binding | Storage Queue Data Contributor, Storage Queue Data Message Sender
Next steps
Run a function as queue storage data changes (Trigger)
RabbitMQ bindings for Azure Functions overview
NOTE
The RabbitMQ bindings are only fully supported on Premium and Dedicated App Service plans. Consumption plans aren't
supported.
RabbitMQ bindings are only supported for Azure Functions version 3.x and later versions.
Azure Functions integrates with RabbitMQ via triggers and bindings. The Azure Functions RabbitMQ extension
allows you to send and receive messages using the RabbitMQ API with Functions.
ACTION | TYPE
Run a function when a RabbitMQ message is created | Trigger
Send RabbitMQ messages from Azure Functions | Output binding
Prerequisites
Before working with the RabbitMQ extension, you must set up your RabbitMQ endpoint. To learn more about
RabbitMQ, see the getting started page.
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
Add the extension to your project by installing this NuGet package.
Install bundle
The RabbitMQ extension is part of an extension bundle, which is specified in your host.json project file. When
you create a project that targets version 3.x or later, you should already have this bundle installed. To learn more,
see extension bundle.
Next steps
Run a function when a RabbitMQ message is created (Trigger)
Send RabbitMQ messages from Azure Functions (Output binding)
RabbitMQ trigger for Azure Functions overview
NOTE
The RabbitMQ bindings are only fully supported on Premium and Dedicated plans. Consumption is not supported.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that reads and logs the RabbitMQ message as a RabbitMQ Event:
[FunctionName("RabbitMQTriggerCSharp")]
public static void RabbitMQTrigger_BasicDeliverEventArgs(
[RabbitMQTrigger("queue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")]
BasicDeliverEventArgs args,
ILogger logger
)
{
    logger.LogInformation($"C# RabbitMQ queue trigger function processed message: {Encoding.UTF8.GetString(args.Body)}");
}
As with JSON objects, an error occurs if the message isn't properly formatted as a C# object. If it is, it's bound to the variable pocObj, which can be used however it's needed.
The following Java function uses the @RabbitMQTrigger annotation from the Java RabbitMQ types to describe
the configuration for a RabbitMQ queue trigger. The function grabs the message placed on the queue and adds
it to the logs.
@FunctionName("RabbitMQTriggerExample")
public void run(
    @RabbitMQTrigger(connectionStringSetting = "rabbitMQConnectionAppSetting", queueName = "queue") String input,
final ExecutionContext context)
{
    context.getLogger().info("Java RabbitMQ trigger processed a message: " + input);
}
The following example shows a RabbitMQ trigger binding in a function.json file and a JavaScript function that
uses the binding. The function reads and logs a RabbitMQ message.
Here's the binding data in the function.json file:
{
"bindings": [
{
"name": "myQueueItem",
"type": "rabbitMQTrigger",
"direction": "in",
"queueName": "queue",
"connectionStringSetting": "rabbitMQConnectionAppSetting"
}
]
}
The following example demonstrates how to read a RabbitMQ queue message via a trigger.
A RabbitMQ binding is defined in function.json where type is set to RabbitMQTrigger .
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "myQueueItem",
"type": "rabbitMQTrigger",
"direction": "in",
"queueName": "queue",
"connectionStringSetting": "rabbitMQConnectionAppSetting"
}
]
}
import logging
import azure.functions as func

def main(myQueueItem) -> None:
    logging.info('Python RabbitMQ trigger function processed a queue item: %s', myQueueItem)
Attributes
Both in-process and isolated process C# libraries use the attribute to define the function. C# script instead uses a
function.json configuration file.
The attribute's constructor takes the following parameters:
PARAMETER | DESCRIPTION
ConnectionStringSetting | The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set ConnectionStringSetting: "rabbitMQConnection", then in both the local.settings.json and in your function app you need a setting like "rabbitMQConnection": "<ActualConnectionstring>".
Port | Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of 5672.
In-process
Isolated process
C# script
[FunctionName("RabbitMQTest")]
public static void RabbitMQTest([RabbitMQTrigger("queue")] string message, ILogger log)
{
...
}
Annotations
The RabbitMQTrigger annotation allows you to create a function that runs when a RabbitMQ message is created.
The annotation supports the following configuration options:
PARAMETER | DESCRIPTION
connectionStringSetting | The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set connectionStringSetting: "rabbitMQConnection", then in both the local.settings.json and in your function app you need a setting like "rabbitMQConnection": "<ActualConnectionstring>".
port | Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of 5672.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
FUNCTION.JSON PROPERTY | DESCRIPTION
connectionStringSetting | The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set connectionStringSetting: "rabbitMQConnection", then in both the local.settings.json and in your function app you need a setting like "rabbitMQConnection": "<ActualConnectionstring>".
port | Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of 5672.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
See the Example section for complete examples.
Usage
The parameter type supported by the RabbitMQ trigger depends on the C# modality used.
In-process
Isolated process
C# script
The default message type is RabbitMQ Event, and the Body property of the RabbitMQ Event can be read as the
types listed below:
An object serializable as JSON - The message is delivered as a valid JSON string.
string
byte[]
POCO - The message is formatted as a C# object. For a complete example, see the C# example.
Refer to Java annotations.
The queue message is available via context.bindings.<NAME> where <NAME> matches the name defined in
function.json. If the payload is JSON, the value is deserialized into an object.
Refer to the Python example.
host.json settings
This section describes the configuration settings available for this binding in versions 2.x and higher. Settings in
the host.json file apply to all functions in a function app instance. The example host.json file below contains only
the version 2.x+ settings for this binding. For more information about function app configuration settings in
versions 2.x and later versions, see host.json reference for Azure Functions.
{
"version": "2.0",
"extensions": {
"rabbitMQ": {
"prefetchCount": 100,
"queueName": "queue",
"connectionString": "amqp://user:password@url:port",
"port": 10
}
}
}
Local testing
NOTE
The connectionString takes precedence over "hostName", "userName", and "password". If all of these are set, the connectionString overrides the other settings.
If you're testing locally without a connection string, you should set the "hostName" setting, plus "userName" and "password" if applicable, in the "rabbitMQ" section of host.json:
{
"version": "2.0",
"extensions": {
"rabbitMQ": {
...
"hostName": "localhost",
"username": "userNameSetting",
"password": "passwordSetting"
}
}
}
In the CLI, you can enable Runtime Scale Monitoring by using the following command:
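az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites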
Next steps
Send RabbitMQ messages from Azure Functions (Output binding)
RabbitMQ output binding for Azure Functions overview
NOTE
The RabbitMQ bindings are only fully supported on Premium and Dedicated plans. Consumption is not supported.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that sends a RabbitMQ message when triggered by a TimerTrigger
every 5 minutes using the method return value as the output:
[FunctionName("RabbitMQOutput")]
[return: RabbitMQ(QueueName = "outputQueue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")]
public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
{
log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
return $"{DateTime.Now}";
}
The following example shows how to use the IAsyncCollector interface to send messages.
[FunctionName("RabbitMQOutput")]
public static async Task Run(
    [RabbitMQTrigger("sourceQueue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] string rabbitMQEvent,
    [RabbitMQ(QueueName = "destinationQueue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] IAsyncCollector<string> outputEvents,
    ILogger log)
{
// send the message
await outputEvents.AddAsync(JsonConvert.SerializeObject(rabbitMQEvent));
}
The following Java function uses the @RabbitMQOutput annotation from the Java RabbitMQ types to describe the
configuration for a RabbitMQ queue output binding. The function sends a message to the RabbitMQ queue
when triggered by a TimerTrigger every 5 minutes.
@FunctionName("RabbitMQOutputExample")
public void run(
@TimerTrigger(name = "keepAliveTrigger", schedule = "0 */5 * * * *") String timerInfo,
@RabbitMQOutput(connectionStringSetting = "rabbitMQConnectionAppSetting", queueName = "hello")
OutputBinding<String> output,
final ExecutionContext context) {
output.setValue("Some string");
}
The following example shows a RabbitMQ output binding in a function.json file and a JavaScript function that
uses the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ
queue.
Here's the binding data in the function.json file:
{
"bindings": [
{
"type": "httpTrigger",
"direction": "in",
"authLevel": "function",
"name": "input",
"methods": [
"get",
"post"
]
},
{
"type": "rabbitMQ",
"name": "outputMessage",
"queueName": "outputQueue",
"connectionStringSetting": "rabbitMQConnectionAppSetting",
"direction": "out"
}
]
}
Here's JavaScript code:
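module.exports = async function (context, input) {
    // Sketch (assumed shape): forward the HTTP payload to the outputMessage
    // RabbitMQ binding defined in the function.json above.
    context.bindings.outputMessage = input.body;
};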
The following example shows a RabbitMQ output binding in a function.json file and a Python function that uses
the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ queue.
Here's the binding data in the function.json file:
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "rabbitMQ",
"name": "outputMessage",
"queueName": "outputQueue",
"connectionStringSetting": "rabbitMQConnectionAppSetting",
"direction": "out"
}
]
}
In __init__.py:
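import azure.functions as func

def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpResponse:
    # Sketch (assumed shape): forward the HTTP payload to the outputMessage
    # RabbitMQ binding defined in the function.json above.
    input_msg = req.params.get('message')
    outputMessage.set(input_msg)
    return func.HttpResponse('OK')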
Attributes
Both in-process and isolated process C# libraries use the attribute to define the function. C# script instead uses a
function.json configuration file.
The attribute's constructor takes the following parameters:
PARAMETER | DESCRIPTION
ConnectionStringSetting | The name of the app setting that contains the RabbitMQ message queue connection string. The binding won't work when you specify the connection string directly instead of through an app setting. For example, when you have set ConnectionStringSetting: "rabbitMQConnection", then in both the local.settings.json and in your function app you need a setting like "rabbitMQConnection": "<ActualConnectionstring>".
Port | Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of 5672.
In-process
Isolated process
C# script
[FunctionName("RabbitMQOutput")]
public static async Task Run(
[RabbitMQTrigger("SourceQueue", ConnectionStringSetting = "TriggerConnectionString")] string rabbitMQEvent,
    [RabbitMQ("DestinationQueue", ConnectionStringSetting = "OutputConnectionString")] IAsyncCollector<string> outputEvents,
ILogger log)
{
...
}
Annotations
The RabbitMQOutput annotation allows you to create a function that sends messages to a RabbitMQ queue.
The annotation supports the following configuration settings:
PARAMETER | DESCRIPTION
connectionStringSetting | The name of the app setting that contains the RabbitMQ message queue connection string. The output binding won't work when you specify the connection string directly instead of through an app setting. For example, when you have set connectionStringSetting: "rabbitMQConnection", then in both the local.settings.json and in your function app you need a setting like "rabbitMQConnection": "<ActualConnectionstring>".
port | Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of 5672.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
FUNCTION.JSON PROPERTY | DESCRIPTION
connectionStringSetting | The name of the app setting that contains the RabbitMQ message queue connection string. The binding won't work when you specify the connection string directly instead of through an app setting. For example, when you have set connectionStringSetting: "rabbitMQConnection", then in both the local.settings.json and in your function app you need a setting like "rabbitMQConnection": "<ActualConnectionstring>".
port | Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of 5672.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
See the Example section for complete examples.
Usage
The parameter type supported by the RabbitMQ output binding depends on the Functions runtime version, the extension package version, and the C# modality used.
In-process
Isolated process
C# script
The queue message is available via context.bindings.<NAME> where <NAME> matches the name defined in
function.json. If the payload is JSON, the value is deserialized into an object.
Refer to the Python example.
Next steps
Run a function when a RabbitMQ message is created (Trigger)
Azure Functions SendGrid bindings
This article explains how to send email by using SendGrid bindings in Azure Functions. Azure Functions
supports an output binding for SendGrid.
This is reference information for Azure Functions developers. If you're new to Azure Functions, start with the
following resources:
Azure Functions developer reference.
Create your first function.
C# developer references:
In-process class library
Isolated process class library
C# script
Create your first function.
JavaScript developer reference.
Create your first function.
Java developer reference.
Create your first function.
Python developer reference
Create your first function.
PowerShell developer reference
Azure Functions triggers and bindings concepts.
Code and test Azure Functions locally.
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The functionality of the extension varies depending on the extension version:
Functions v2.x+
Functions v1.x
Add the extension to your project by installing the NuGet package, version 3.x.
Install bundle
Starting with Functions version 2.x, the SendGrid extension is part of an extension bundle, which is specified in your
host.json project file. To learn more, see extension bundle.
Bundle v2.x
Functions 1.x
This version of the extension should already be available to your function app with extension bundle, version 2.x.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following examples show a C# function that uses a Service Bus queue trigger and a SendGrid output binding.
The following example is a synchronous execution:
using SendGrid.Helpers.Mail;
using System.Text.Json;
...
[FunctionName("SendEmail")]
public static void Run(
[ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] Message email,
[SendGrid(ApiKey = "CustomSendGridKeyAppSettingName")] out SendGridMessage message)
{
var emailObject = JsonSerializer.Deserialize<OutgoingEmail>(Encoding.UTF8.GetString(email.Body));
...
The following example is an asynchronous execution:
[FunctionName("SendEmail")]
public static async Task Run(
[ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] Message email,
[SendGrid(ApiKey = "CustomSendGridKeyAppSettingName")] IAsyncCollector<SendGridMessage> messageCollector)
{
    var emailObject = JsonSerializer.Deserialize<OutgoingEmail>(Encoding.UTF8.GetString(email.Body));

    var message = new SendGridMessage();
    message.AddTo(emailObject.To);
    message.AddContent("text/html", emailObject.Body);
    message.SetFrom(new EmailAddress(emailObject.From));
    message.SetSubject(emailObject.Subject);

    await messageCollector.AddAsync(message);
}
You can omit setting the attribute's ApiKey property if you have your API key in an app setting named
"AzureWebJobsSendGridApiKey".
The following example shows a SendGrid output binding in a function.json file and a JavaScript function that
uses the binding.
Here's the binding data in the function.json file:
{
"bindings": [
{
"name": "$return",
"type": "sendGrid",
"direction": "out",
"apiKey" : "MySendGridKey",
"to": "{ToEmail}",
"from": "{FromEmail}",
"subject": "SendGrid output bindings"
}
]
}
Here's the JavaScript code:
module.exports = async function (context, input) {
    const message = {
        content: [{
            type: 'text/plain',
            value: `Hello, ${input.body}!`
        }]
    };
    return message;
};
{
"scriptFile": "__init__.py",
"bindings": [
{
"type": "httpTrigger",
"authLevel": "function",
"direction": "in",
"name": "req",
"methods": ["get", "post"]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "sendGrid",
"name": "sendGridMessage",
"direction": "out",
"apiKey": "SendGrid_API_Key",
"from": "sender@contoso.com"
}
]
}
The following function shows how you can provide custom values for optional properties.
import logging
import json
import azure.functions as func

def main(req: func.HttpRequest, sendGridMessage: func.Out[str]) -> func.HttpResponse:
    value = "Sent from Azure Functions"

    message = {
        "personalizations": [{
            "to": [{
                "email": "user@contoso.com"
            }]}],
        "subject": "Azure Functions email with SendGrid",
        "content": [{
            "type": "text/plain",
            "value": value}]}

    sendGridMessage.set(json.dumps(message))

    return func.HttpResponse("Sent")
The following example uses the @SendGridOutput annotation from the Java functions runtime library to send an
email using the SendGrid output binding.
package com.function;
import java.util.*;
import com.microsoft.azure.functions.annotation.*;
import com.microsoft.azure.functions.*;
public class HttpTriggerSendGrid {
    @FunctionName("HttpTriggerSendGrid")
public HttpResponseMessage run(
@HttpTrigger(
name = "req",
methods = { HttpMethod.GET, HttpMethod.POST },
authLevel = AuthorizationLevel.FUNCTION)
HttpRequestMessage<Optional<String>> request,
@SendGridOutput(
name = "message",
dataType = "String",
apiKey = "SendGrid_API_Key",
to = "user@contoso.com",
from = "sender@contoso.com",
subject = "Azure Functions email with SendGrid",
text = "Sent from Azure Functions")
        OutputBinding<String> message,
        final ExecutionContext context) {

        final String body = request.getBody().orElse("Sent from Azure Functions");
        message.setValue(body);
        return request.createResponseBuilder(HttpStatus.OK).body("Sent").build();
    }
}
Attributes
Both in-process and isolated process C# libraries use attributes to define the output binding. C# script instead
uses a function.json configuration file.
In-process
Isolated process
C# Script
In in-process function apps, use the SendGridAttribute, which supports the following parameters.
ATTRIBUTE/ANNOTATION PROPERTY | DESCRIPTION
ApiKey | The name of an app setting that contains your API key. If not set, the default app setting name is AzureWebJobsSendGridApiKey.
Annotations
The SendGridOutput annotation allows you to declaratively configure the SendGrid binding by providing the
following configuration values.
apiKey
dataType
name
to
from
subject
text
Configuration
The following table lists the binding configuration properties available in the function.json file and the SendGrid
attribute/annotation.
PROPERTY | DESCRIPTION
name | The variable name used in function code for the request or request body. This value is $return when there is only one return value.
apiKey | The name of an app setting that contains your API key. If not set, the default app setting name is AzureWebJobsSendGridApiKey.
Optional properties may have default values defined in the binding and can be added or overridden programmatically.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
host.json settings
This section describes the configuration settings available for this binding in versions 2.x and higher. Settings in
the host.json file apply to all functions in a function app instance. The example host.json file below contains only
the version 2.x+ settings for this binding. For more information about function app configuration settings in
versions 2.x and later versions, see host.json reference for Azure Functions.
NOTE
For a reference of host.json in Functions 1.x, see host.json reference for Azure Functions 1.x.
{
"version": "2.0",
"extensions": {
"sendGrid": {
"from": "Azure Functions <samples@functions.com>"
}
}
}
Next steps
Learn more about Azure Functions triggers and bindings
Azure Service Bus bindings for Azure Functions
Azure Functions integrates with Azure Service Bus via triggers and bindings. Integrating with Service Bus allows
you to build functions that react to and send queue or topic messages.
ACTION | TYPE
Run a function when a Service Bus queue or topic message is created | Trigger
Send Azure Service Bus messages from Azure Functions | Output binding
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
Add the extension to your project by installing this NuGet package.
The functionality of the extension varies depending on the extension version:
Extension 5.x+
Functions 2.x+
Functions 1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
This version allows you to bind to types from Azure.Messaging.ServiceBus.
This extension version is available by installing the NuGet package, version 5.x or later.
Install bundle
The Service Bus binding is part of an extension bundle, which is specified in your host.json project file. You may
need to modify this bundle to change the version of the binding, or if bundles aren't already installed. To learn
more, see extension bundle.
Bundle v3.x
Bundle v2.x
Functions 1.x
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
You can add this version of the extension from the extension bundle v3 by adding or replacing the following
code in your host.json file:
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
host.json settings
This section describes the configuration settings available for this binding, which depends on the runtime and
extension version.
Extension 5.x+
Functions 2.x+
Functions 1.x
{
"version": "2.0",
"extensions": {
"serviceBus": {
"clientRetryOptions":{
"mode": "exponential",
"tryTimeout": "00:01:00",
"delay": "00:00:00.80",
"maxDelay": "00:01:00",
"maxRetries": 3
},
"prefetchCount": 0,
"transportType": "amqpWebSockets",
"webProxy": "https://proxyserver:8080",
"autoCompleteMessages": true,
"maxAutoLockRenewalDuration": "00:05:00",
"maxConcurrentCalls": 16,
"maxConcurrentSessions": 8,
"maxMessageBatchSize": 1000,
"sessionIdleTimeout": "00:01:00",
"enableCrossEntityTransactions": false
}
}
}
When you set the isSessionsEnabled property or attribute on the trigger to true , the sessionHandlerOptions is
honored. When you set the isSessionsEnabled property or attribute on the trigger to false , the
messageHandlerOptions is honored.
Next steps
Run a function when a Service Bus queue or topic message is created (Trigger)
Send Azure Service Bus messages from Azure Functions (Output binding)
Azure Service Bus trigger for Azure Functions
Use the Service Bus trigger to respond to messages from a Service Bus queue or topic. Starting with extension
version 3.1.0, you can trigger on a session-enabled queue or topic.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that reads message metadata and logs a Service Bus queue
message:
[FunctionName("ServiceBusQueueTriggerCSharp")]
public static void Run(
[ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")]
string myQueueItem,
Int32 deliveryCount,
DateTime enqueuedTimeUtc,
string messageId,
ILogger log)
{
log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
log.LogInformation($"EnqueuedTimeUtc={enqueuedTimeUtc}");
log.LogInformation($"DeliveryCount={deliveryCount}");
log.LogInformation($"MessageId={messageId}");
}
The following Java function uses the @ServiceBusQueueTrigger annotation from the Java functions runtime
library to describe the configuration for a Service Bus queue trigger. The function grabs the message placed on
the queue and adds it to the logs.
@FunctionName("sbprocessor")
public void serviceBusProcess(
@ServiceBusQueueTrigger(name = "msg",
queueName = "myqueuename",
connection = "myconnvarname") String message,
final ExecutionContext context
) {
context.getLogger().info(message);
}
Java functions can also be triggered when a message is added to a Service Bus topic. The following example
uses the @ServiceBusTopicTrigger annotation to describe the trigger configuration.
@FunctionName("sbtopicprocessor")
public void run(
@ServiceBusTopicTrigger(
name = "message",
topicName = "mytopicname",
subscriptionName = "mysubscription",
connection = "ServiceBusConnection"
) String message,
final ExecutionContext context
) {
context.getLogger().info(message);
}
The following example shows a Service Bus trigger binding in a function.json file and a JavaScript function that
uses the binding. The function reads message metadata and logs a Service Bus queue message.
Here's the binding data in the function.json file:
{
"bindings": [
{
"queueName": "testqueue",
"connection": "MyServiceBusConnection",
"name": "myQueueItem",
"type": "serviceBusTrigger",
"direction": "in"
}
],
"disabled": false
}
The following example shows a Service Bus trigger binding in a function.json file and a PowerShell function that
uses the binding.
Here's the binding data in the function.json file:
{
"bindings": [
{
"name": "mySbMsg",
"type": "serviceBusTrigger",
"direction": "in",
"topicName": "mytopic",
"subscriptionName": "mysubscription",
"connection": "AzureServiceBusConnectionString"
}
]
}
Here's the function that runs when a Service Bus message is sent.
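param([string] $mySbMsg, $TriggerMetadata)

# Sketch (assumed shape): log the Service Bus message bound as mySbMsg in function.json.
Write-Host "PowerShell ServiceBus queue trigger function processed message: $mySbMsg"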
The following example demonstrates how to read a Service Bus queue message via a trigger.
A Service Bus binding is defined in function.json where type is set to serviceBusTrigger .
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "msg",
"type": "serviceBusTrigger",
"direction": "in",
"queueName": "inputqueue",
"connection": "AzureServiceBusConnectionString"
}
]
}
The code in __init__.py declares a parameter as func.ServiceBusMessage, which allows you to read the queue message in your function.
import logging
import json

import azure.functions as func

def main(msg: func.ServiceBusMessage):
    result = json.dumps({
        'message_id': msg.message_id,
        'body': msg.get_body().decode('utf-8'),
        'content_type': msg.content_type,
        'expiration_time': msg.expiration_time,
        'label': msg.label,
        'partition_key': msg.partition_key,
        'reply_to': msg.reply_to,
        'reply_to_session_id': msg.reply_to_session_id,
        'scheduled_enqueue_time': msg.scheduled_enqueue_time,
        'session_id': msg.session_id,
        'time_to_live': msg.time_to_live,
        'to': msg.to,
        'user_properties': msg.user_properties,
        'metadata': msg.metadata
    }, default=str)
    logging.info(result)
Attributes
Both in-process and isolated process C# libraries use the ServiceBusTriggerAttribute attribute to define the
function trigger. C# script instead uses a function.json configuration file.
In-process
Isolated process
C# script
The following table explains the properties you can set using this trigger attribute:
Access | Access rights for the connection string. Available values are manage and listen. The default is manage, which indicates that the connection has the Manage permission. If you use a connection string that does not have the Manage permission, set accessRights to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Annotations
The ServiceBusQueueTrigger annotation allows you to create a function that runs when a Service Bus queue
message is created. Configuration options available include the following properties:
PROPERTY | DESCRIPTION
name | The name of the variable that represents the queue or topic message in function code.
The ServiceBusTopicTrigger annotation allows you to designate a topic and subscription to target what data
triggers the function.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
See the trigger example for more detail.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
FUNCTION.JSON PROPERTY | DESCRIPTION
name | The name of the variable that represents the queue or topic message in function code.
accessRights | Access rights for the connection string. Available values are manage and listen. The default is manage, which indicates that the connection has the Manage permission. If you use a connection string that does not have the Manage permission, set accessRights to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.
autoComplete | Must be true for non-C# functions, which means that the trigger should either automatically call complete after processing, or the function code manually calls complete.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
See the Example section for complete examples.
Usage
The following parameter types are supported by all C# modalities and extension versions:
string
byte[]
Object (when a message contains JSON, Functions tries to deserialize the JSON data into a plain-old CLR object type)
Messaging-specific parameter types contain additional message metadata. The specific types supported by the
Service Bus trigger depend on the Functions runtime version, the extension package version, and the C#
modality used.
Extension v5.x
Functions 2.x and higher
Functions 1.x
Use the ServiceBusReceivedMessage type to receive message metadata from Service Bus Queues and
Subscriptions. To learn more, see Messages, payloads, and serialization.
In C# class libraries, the attribute's constructor takes the name of the queue or the topic and subscription.
You can also use the ServiceBusAccountAttribute to specify the Service Bus account to use. The constructor takes
the name of an app setting that contains a Service Bus connection string. The attribute can be applied at the
parameter, method, or class level. The following example shows class level and method level:
[ServiceBusAccount("ClassLevelServiceBusAppSetting")]
public static class AzureFunctions
{
[ServiceBusAccount("MethodLevelServiceBusAppSetting")]
[FunctionName("ServiceBusQueueTriggerCSharp")]
public static void Run(
[ServiceBusTrigger("myqueue", AccessRights.Manage)]
string myQueueItem, ILogger log)
{
...
}
When the Connection property isn't defined, Functions looks for an app setting named AzureWebJobsServiceBus ,
which is the default name for the Service Bus connection string. You can also set the Connection property to
specify the name of an application setting that contains the Service Bus connection string to use.
In Java, the incoming Service Bus message is available via a ServiceBusQueueMessage or ServiceBusTopicMessage parameter.
In JavaScript, access the queue or topic message by using context.bindings.<name from function.json> . The Service Bus message is passed into the function as either a string or JSON object.
In PowerShell, the Service Bus instance is available via the parameter configured in the function.json file's name property.
In Python, the queue message is available to the function via a parameter typed as func.ServiceBusMessage . The Service Bus message is passed into the function as either a string or JSON object.
For a complete example, see the examples section.
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Service Bus. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string, follow the steps shown at Get the management credentials. The connection string
must be for a Service Bus namespace, not limited to a specific queue or topic.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For
example, if you set connection to "MyServiceBus", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyServiceBus". If you leave connection empty, the Functions runtime uses the default Service
Bus connection string in the app setting that is named "AzureWebJobsServiceBus".
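When developing locally, a minimal local.settings.json sketch (the setting name ServiceBusConnection is an assumption, not from the source) might look like:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "ServiceBusConnection": "<Service Bus namespace connection string>"
  }
}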
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
In this mode, the extension requires the following properties:
PROPERTY | ENVIRONMENT VARIABLE TEMPLATE | DESCRIPTION | EXAMPLE VALUE
Fully qualified namespace | <CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace | The fully qualified Service Bus namespace. | <service_bus_namespace>.servicebus.windows.net
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
NOTE
When using Azure App Configuration or Key Vault to provide settings for Managed Identity connections, setting names
should use a valid key separator such as : or / in place of the __ to ensure names are resolved correctly.
For example, <CONNECTION_NAME_PREFIX>:fullyQualifiedNamespace .
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
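For example, a binding whose connection property is set to "ServiceBusConnection" (an assumed name) could be backed by a single identity-based application setting:
ServiceBusConnection__fullyQualifiedNamespace = <service_bus_namespace>.servicebus.windows.net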
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You'll need to create a role assignment that provides access to your topics and queues at runtime. Management
roles like Owner aren't sufficient. The following table shows built-in roles that are recommended when using the
Service Bus extension in normal operation. Your application may require additional permissions based on the
code you write.
BINDING TYPE | EXAMPLE BUILT-IN ROLES
Trigger¹ | Azure Service Bus Data Receiver, Azure Service Bus Data Owner
¹ For triggering from Service Bus topics, the role assignment needs to have effective scope over the Service Bus
subscription resource. If only the topic is included, an error will occur. Some clients, such as the Azure portal,
don't expose the Service Bus subscription resource as a scope for role assignment. In such cases, the Azure CLI
may be used instead. To learn more, see Azure built-in roles for Azure Service Bus.
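As a sketch of such an assignment with the Azure CLI (all identifiers are placeholders), scoped to the namespace so that queues and topic subscriptions are both covered:
az role assignment create --role "Azure Service Bus Data Receiver" \
    --assignee <principal-or-app-id> \
    --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ServiceBus/namespaces/<namespace>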
Poison messages
Poison message handling can't be controlled or configured in Azure Functions. Service Bus handles poison
messages itself.
PeekLock behavior
The Functions runtime receives a message in PeekLock mode. It calls Complete on the message if the function
finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout,
the lock is automatically renewed as long as the function is running.
The maxAutoRenewDuration is configurable in host.json, which maps to OnMessageOptions.MaxAutoRenewDuration. According to the Service Bus documentation, the maximum allowed for this setting is 5 minutes. Although you can increase the Functions time limit from its default of 5 minutes to 10 minutes, you shouldn't do so for Service Bus functions, because you'd exceed the Service Bus renewal limit.
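As a sketch for versions 2.x through 4.x of the extension (the value shown is illustrative), the setting lives under the serviceBus section of host.json:
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "maxAutoRenewDuration": "00:05:00"
      }
    }
  }
}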
Message metadata
Messaging-specific types let you easily retrieve metadata as properties of the object. These properties depend
on the Functions runtime version, the extension package version, and the C# modality used.
Extension v5.x
Functions 2.x and higher
Functions 1.x
Next steps
Send Azure Service Bus messages from Azure Functions (Output binding)
Azure Service Bus output binding for Azure
Functions
8/2/2022 • 17 minutes to read
Use the Azure Service Bus output binding to send queue or topic messages.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that sends a Service Bus queue message:
[FunctionName("ServiceBusOutput")]
[return: ServiceBus("myqueue", Connection = "ServiceBusConnection")]
public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
{
log.LogInformation($"C# function processed: {input.Text}");
return input.Text;
}
The following example shows a Java function that sends a message to a Service Bus queue myqueue when
triggered by an HTTP request.
@FunctionName("httpToServiceBusQueue")
@ServiceBusQueueOutput(name = "message", queueName = "myqueue", connection = "AzureServiceBusConnection")
public String pushToQueue(
@HttpTrigger(name = "request", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS)
final String message,
@HttpOutput(name = "response") final OutputBinding<String> result ) {
result.setValue(message + " has been sent.");
return message;
}
In the Java functions runtime library, use the @ServiceBusQueueOutput annotation on function parameters whose value
would be written to a Service Bus queue. The parameter type should be OutputBinding<T> , where T is any native
Java type or a POJO.
Java functions can also write to a Service Bus topic. The following example uses the @ServiceBusTopicOutput
annotation to describe the configuration for the output binding.
@FunctionName("sbtopicsend")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel =
AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
@ServiceBusTopicOutput(name = "message", topicName = "mytopicname", subscriptionName =
"mysubscription", connection = "ServiceBusConnection") OutputBinding<String> message,
final ExecutionContext context) {
        String name = request.getBody().orElse("Azure Functions");
        message.setValue(name);
        return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
    }
The following example shows a Service Bus output binding in a function.json file and a JavaScript function that
uses the binding. The function uses a timer trigger to send a queue message every 15 seconds.
Here's the binding data in the function.json file:
{
"bindings": [
{
"schedule": "0/15 * * * * *",
"name": "myTimer",
"runsOnStartup": true,
"type": "timerTrigger",
"direction": "in"
},
{
"name": "outputSbQueue",
"type": "serviceBus",
"queueName": "testqueue",
"connection": "MyServiceBusConnection",
"direction": "out"
}
],
"disabled": false
}
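The JavaScript code for this example didn't survive extraction. A minimal sketch consistent with the function.json above assigns the message to the output binding:
module.exports = async function (context, myTimer) {
    // Assigning to the binding named in function.json sends the queue message.
    context.bindings.outputSbQueue = 'Service Bus queue message created at ' + new Date().toISOString();
};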
The following example shows a Service Bus output binding in a function.json file and a PowerShell function that
uses the binding.
Here's the binding data in the function.json file:
{
"bindings": [
{
"type": "serviceBus",
"direction": "out",
"connection": "AzureServiceBusConnectionString",
"name": "outputSbMsg",
"queueName": "outqueue",
"topicName": "outtopic"
}
]
}
param($QueueItem, $TriggerMetadata)
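The rest of the PowerShell body didn't survive extraction. A hedged completion, assuming the output binding named outputSbMsg from the function.json above, sends a value with the Push-OutputBinding cmdlet:
Push-OutputBinding -Name outputSbMsg -Value $QueueItem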
The following example demonstrates how to write out to a Service Bus queue in Python.
A Service Bus binding definition is defined in function.json where type is set to serviceBus .
{
"scriptFile": "__init__.py",
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "serviceBus",
"direction": "out",
"connection": "AzureServiceBusConnectionString",
"name": "msg",
"queueName": "outqueue"
}
]
}
In __init__.py, you can write out a message to the queue by passing a value to the set method.
import azure.functions as func

def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    input_msg = req.params.get('message')
    msg.set(input_msg)
    return 'OK'
Attributes
Both in-process and isolated process C# libraries use attributes to define the output binding. C# script instead
uses a function.json configuration file.
In-process
Isolated process
C# script
QueueName | Name of the queue. Set only if sending queue messages, not for a topic.
TopicName | Name of the topic. Set only if sending topic messages, not for a queue.
Access | Access rights for the connection string. Available values are manage and listen. The default is manage, which indicates that the connection has the Manage permission. If you use a connection string that does not have the Manage permission, set accessRights to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.
Here's an example that shows the attribute applied to the return value of the function:
[FunctionName("ServiceBusOutput")]
[return: ServiceBus("myqueue")]
public static string Run([HttpTrigger] dynamic input, ILogger log)
{
...
}
You can set the Connection property to specify the name of an app setting that contains the Service Bus
connection string to use, as shown in the following example:
[FunctionName("ServiceBusOutput")]
[return: ServiceBus("myqueue", Connection = "ServiceBusConnection")]
public static string Run([HttpTrigger] dynamic input, ILogger log)
{
...
}
Annotations
The ServiceBusQueueOutput and ServiceBusTopicOutput annotations are available to write a message as a
function output. The parameter decorated with these annotations must be declared as an OutputBinding<T>
where T is the type corresponding to the message's type.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Configuration
The following table explains the binding configuration properties that you set in the function.json file and the
ServiceBus attribute.
name | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value.
queueName | Name of the queue. Set only if sending queue messages, not for a topic.
topicName | Name of the topic. Set only if sending topic messages, not for a queue.
accessRights (v1 only) | Access rights for the connection string. Available values are manage and listen. The default is manage, which indicates that the connection has the Manage permission. If you use a connection string that does not have the Manage permission, set accessRights to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
See the Example section for complete examples.
Usage
The following output parameter types are supported by all C# modalities and extension versions:
System.String | Use when the message to write is simple text. If the parameter value is null when the function exits, Functions doesn't create a message.
byte[] | Use for writing binary data messages. If the parameter value is null when the function exits, Functions doesn't create a message.
Messaging-specific parameter types contain additional message metadata. The specific types supported by the Service Bus output binding depend on the Functions runtime version, the extension package version, and the C# modality used.
Extension v5.x
Functions 2.x and higher
Functions 1.x
Use the ServiceBusMessage type when sending messages with metadata. Parameters are defined as return
type attributes. Use an ICollector<T> or IAsyncCollector<T> to write multiple messages. A message is created
when you call the Add method.
If the parameter value is null when the function exits, Functions doesn't create a message.
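A minimal sketch of writing multiple messages with extension v5.x types (the queue name and connection setting are assumptions):
[FunctionName("ServiceBusMultiOutput")]
public static async Task Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    [ServiceBus("myqueue", Connection = "ServiceBusConnection")] IAsyncCollector<ServiceBusMessage> collector,
    ILogger log)
{
    // Each AddAsync call creates one Service Bus message.
    await collector.AddAsync(new ServiceBusMessage("first message"));
    await collector.AddAsync(new ServiceBusMessage("second message"));
}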
You can also use the ServiceBusAccountAttribute to specify the Service Bus account to use. The constructor takes
the name of an app setting that contains a Service Bus connection string. The attribute can be applied at the
parameter, method, or class level. The following example shows class level and method level:
[ServiceBusAccount("ClassLevelServiceBusAppSetting")]
public static class AzureFunctions
{
[ServiceBusAccount("MethodLevelServiceBusAppSetting")]
[FunctionName("ServiceBusQueueTriggerCSharp")]
public static void Run(
[ServiceBusTrigger("myqueue", AccessRights.Manage)]
string myQueueItem, ILogger log)
{
...
    }
}
In Azure Functions 1.x, the runtime creates the queue if it doesn't exist and you have set accessRights to
manage . In Functions version 2.x and higher, the queue or topic must already exist; if you specify a queue or
topic that doesn't exist, the function fails.
In Java, use the Azure Service Bus SDK rather than the built-in output binding.
In JavaScript, access the queue or topic by using context.bindings.<name from function.json> . You can assign a string, a byte array, or a JavaScript object (deserialized into JSON) to context.bindings.<name> .
In PowerShell, output to Service Bus is available via the Push-OutputBinding cmdlet, where you pass arguments that match the name designated by the binding's name parameter in the function.json file.
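For example, with a binding named outputSbMsg as in the earlier function.json:
Push-OutputBinding -Name outputSbMsg -Value "Hello from PowerShell"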
In Python, use the Azure Service Bus SDK rather than the built-in output binding.
For a complete example, see the examples section.
Connections
The connection property is a reference to environment configuration which specifies how the app should
connect to Service Bus. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection.
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string, follow the steps shown at Get the management credentials. The connection string
must be for a Service Bus namespace, not limited to a specific queue or topic.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For
example, if you set connection to "MyServiceBus", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyServiceBus". If you leave connection empty, the Functions runtime uses the default Service
Bus connection string in the app setting that is named "AzureWebJobsServiceBus".
Identity-based connections
If you are using version 5.x or higher of the extension, instead of using a connection string with a secret, you can
have the app use an Azure Active Directory identity. To do this, you would define settings under a common
prefix which maps to the connection property in the trigger and binding configuration.
In this mode, the extension requires the following properties:
PROPERTY | ENVIRONMENT VARIABLE TEMPLATE | DESCRIPTION | EXAMPLE VALUE
Fully qualified namespace | <CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace | The fully qualified Service Bus namespace. | <service_bus_namespace>.servicebus.windows.net
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
NOTE
When using Azure App Configuration or Key Vault to provide settings for Managed Identity connections, setting names
should use a valid key separator such as : or / in place of the __ to ensure names are resolved correctly.
For example, <CONNECTION_NAME_PREFIX>:fullyQualifiedNamespace .
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere
to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to
be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role
that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would
want to ensure the role assignment is scoped only over the resources that need to be read.
You'll need to create a role assignment that provides access to your topics and queues at runtime. Management
roles like Owner aren't sufficient. The following table shows built-in roles that are recommended when using the
Service Bus extension in normal operation. Your application may require additional permissions based on the
code you write.
BINDING TYPE | EXAMPLE BUILT-IN ROLES
Trigger¹ | Azure Service Bus Data Receiver, Azure Service Bus Data Owner
¹ For triggering from Service Bus topics, the role assignment needs to have effective scope over the Service Bus
subscription resource. If only the topic is included, an error will occur. Some clients, such as the Azure portal,
don't expose the Service Bus subscription resource as a scope for role assignment. In such cases, the Azure CLI
may be used instead. To learn more, see Azure built-in roles for Azure Service Bus.
Next steps
Run a function when a Service Bus queue or topic message is created (Trigger)
SignalR Service bindings for Azure Functions
8/2/2022 • 2 minutes to read
This set of articles explains how to authenticate and send real-time messages to clients connected to Azure
SignalR Service by using SignalR Service bindings in Azure Functions. Azure Functions runtime version 2.x and
higher supports input and output bindings for SignalR Service.
ACTION | TYPE
Handle messages from SignalR Service | Trigger binding
Return the service endpoint URL and access token | Input binding
Send SignalR Service messages | Output binding
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
Add the extension to your project by installing this NuGet package.
Install bundle
The SignalR Service extension is part of an extension bundle, which is specified in your host.json project file.
When you create a project that targets version 3.x or later, you should already have this bundle installed. To learn
more, see extension bundle.
Add dependency
To use the SignalR Service annotations in Java functions, you need to add a dependency on the azure-functions-java-library-signalr artifact (version 1.0 or higher) to your pom.xml file.
<dependency>
<groupId>com.microsoft.azure.functions</groupId>
<artifactId>azure-functions-java-library-signalr</artifactId>
<version>1.0.0</version>
</dependency>
Next steps
Handle messages from SignalR Service (Trigger binding)
Return the service endpoint URL and access token (Input binding)
Send SignalR Service messages (Output binding)
SignalR Service trigger binding for Azure Functions
8/2/2022 • 9 minutes to read
Use the SignalR trigger binding to respond to messages sent from Azure SignalR Service. When the function is triggered, the message payload is parsed as a JSON object.
In SignalR Service serverless mode, SignalR Service uses the Upstream feature to send messages from clients to the function app, and the function app uses the SignalR Service trigger binding to handle these messages.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The SignalR Service trigger binding for C# has two programming models: the class-based model and the traditional model. The class-based model provides a consistent SignalR server-side programming experience, while the traditional model provides more flexibility and is similar to other function bindings.
With the class-based model, see Class based model for details.
[FunctionName("SignalRTest")]
public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMessage", parameterNames: new
string[] {"message"})]InvocationContext invocationContext, string message, ILogger logger)
{
logger.LogInformation($"Receive {message} from {invocationContext.ConnectionId}.");
}
Because it can be hard to use ParameterNames in the trigger, the following example shows you how to use the
SignalRParameter attribute to define the message attribute.
[FunctionName("SignalRTest")]
public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMessage")]InvocationContext
invocationContext, [SignalRParameter]string message, ILogger logger)
{
logger.LogInformation($"Receive {message} from {invocationContext.ConnectionId}.");
}
{
"type": "signalRTrigger",
"name": "invocation",
"hubName": "SignalRTest",
"category": "messages",
"event": "SendMessage",
"parameterNames": [
"message"
],
"direction": "in"
}
Here's the JavaScript code:
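The JavaScript body itself didn't survive extraction. A minimal sketch, assuming the function.json above (message is exposed because of parameterNames):
module.exports = async function (context, invocation) {
    // 'message' is available through bindingData because of parameterNames.
    context.log(`Receive ${context.bindingData.message} from ${invocation.ConnectionId}.`);
};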
Here's the Python code:
import json
import logging
import azure.functions as func

def main(invocation) -> None:
    logging.info('Received: %s', json.loads(invocation)['Arguments'][0])
Attributes
Both in-process and isolated process C# libraries use the SignalRTrigger attribute to define the function. C#
script instead uses a function.json configuration file.
In-process
Isolated process
C# script
HubName | This value must be set to the name of the SignalR hub for the function to be triggered.
Category | This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: connections (including connected and disconnected events) and messages (including all other events except those in the connections category).
Event | This value must be set as the event of messages for the function to be triggered. For the messages category, event is the target in the invocation message that clients send. For the connections category, only connected and disconnected are used.
ConnectionStringSetting | The name of the app setting that contains the SignalR Service connection string, which defaults to AzureSignalRConnectionString .
Annotations
There isn't currently a supported Java annotation for a SignalR trigger.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
hubName | This value must be set to the name of the SignalR hub for the function to be triggered.
category | This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: connections (including connected and disconnected events) and messages (including all other events except those in the connections category).
event | This value must be set as the event of messages for the function to be triggered. For the messages category, event is the target in the invocation message that clients send. For the connections category, only connected and disconnected are used.
connectionStringSetting | The name of the app setting that contains the SignalR Service connection string, which defaults to AzureSignalRConnectionString .
Usage
Payloads
The trigger input type is declared as either InvocationContext or a custom type. If you choose
InvocationContext you get full access to the request content. For a custom type, the runtime tries to parse the
JSON request body to set the object properties.
InvocationContext
InvocationContext contains all the content of the message sent from SignalR Service, which includes the following properties:
PROPERTY | DESCRIPTION
UserId | The user identity of the client which sends the message.
Using ParameterNames
The ParameterNames property in SignalRTrigger lets you bind arguments of invocation messages to the parameters of functions. You can use the names you define as part of binding expressions in other bindings or as parameters in your code. That gives you a more convenient way to access the arguments of InvocationContext .
Say you have a JavaScript SignalR client trying to invoke the method broadcast in an Azure Function with two arguments, message1 and message2 .
After you set parameterNames , the names you defined correspond to the arguments sent on the client side. Then arg1 will contain the content of message1 , and arg2 will contain the content of message2 .
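As a sketch of that mapping in C# (hub, category, and event names assumed), a trigger declaring parameterNames of arg1 and arg2 binds the client's two invocation arguments in order:
[FunctionName("Broadcast")]
public static void Run(
    [SignalRTrigger("SignalRTest", "messages", "broadcast", parameterNames: new string[] {"arg1", "arg2"})]
    InvocationContext invocationContext, string arg1, string arg2, ILogger logger)
{
    // arg1 receives message1, arg2 receives message2.
    logger.LogInformation($"Received {arg1} and {arg2} from {invocationContext.ConnectionId}.");
}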
ParameterNames considerations
For the parameter binding, the order matters. If you're using ParameterNames , the order in ParameterNames
matches the order of the arguments you invoke in the client. If you're using attribute [SignalRParameter] in C#,
the order of arguments in Azure Function methods matches the order of arguments in clients.
ParameterNames and attribute [SignalRParameter] cannot be used at the same time, or you will get an
exception.
SignalR Service integration
SignalR Service needs a URL to access Function App when you're using SignalR Service trigger binding. The URL
should be configured in Upstream Settings on the SignalR Service side.
When using SignalR Service trigger, the URL can be simple and formatted as shown below:
<Function_App_URL>/runtime/webhooks/signalr?code=<API_KEY>
The Function_App_URL can be found on the Function App's Overview page, and the API_KEY is generated by Azure Functions. You can get the API_KEY from signalr_extension in the App keys blade of the Function App.
If you want to use more than one Function App together with one SignalR Service, upstream can also support
complex routing rules. Find more details at Upstream settings.
Step by step sample
You can follow the sample in GitHub to deploy a chat room on Function App with SignalR Service trigger
binding and upstream feature: Bidirectional chat room sample
Next steps
Azure Functions development and configuration with Azure SignalR Service
SignalR Service Trigger binding sample
SignalR Service Trigger binding sample in isolated process
SignalR Service input binding for Azure Functions
8/2/2022 • 7 minutes to read
Before a client can connect to Azure SignalR Service, it must retrieve the service endpoint URL and a valid access
token. The SignalRConnectionInfo input binding produces the SignalR Service endpoint URL and a valid token
that are used to connect to the service. Because the token is time-limited and can be used to authenticate a
specific user to a connection, you should not cache the token or share it between clients. An HTTP trigger using
this binding can be used by clients to retrieve the connection information.
For more information on how this binding is used to create a "negotiate" function that can be consumed by a
SignalR client SDK, see the Azure Functions development and configuration article in the SignalR Service
concepts documentation.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that acquires SignalR connection information using the input
binding and returns it over HTTP.
[FunctionName("negotiate")]
public static SignalRConnectionInfo Negotiate(
[HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
[SignalRConnectionInfo(HubName = "chat")]SignalRConnectionInfo connectionInfo)
{
return connectionInfo;
}
The following example shows a SignalR connection info input binding in a function.json file and a function that
uses the binding to return the connection information.
Here's binding data for the example in the function.json file:
{
"type": "signalRConnectionInfo",
"name": "connectionInfo",
"hubName": "chat",
"connectionStringSetting": "<name of setting containing SignalR Service connection string>",
"direction": "in"
}
Here's the JavaScript code:
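The JavaScript body didn't survive extraction. A minimal sketch that returns the bound connection information:
module.exports = async function (context, req, connectionInfo) {
    // Return the connection info produced by the input binding to the client.
    context.res.body = connectionInfo;
};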
The following example shows a Java function that acquires SignalR connection information using the input
binding and returns it over HTTP.
@FunctionName("negotiate")
public SignalRConnectionInfo negotiate(
@HttpTrigger(
name = "req",
methods = { HttpMethod.POST },
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> req,
@SignalRConnectionInfoInput(
name = "connectionInfo",
hubName = "chat") SignalRConnectionInfo connectionInfo) {
return connectionInfo;
}
Usage
Authenticated tokens
When the function is triggered by an authenticated client, you can add a user ID claim to the generated token.
You can easily add authentication to a function app using App Service Authentication.
App Service authentication sets HTTP headers named x-ms-client-principal-id and
x-ms-client-principal-name that contain the authenticated user's client principal ID and name, respectively.
In-process
Isolated process
C# Script
You can set the UserId property of the binding to the value from either header using a binding expression:
{headers.x-ms-client-principal-id} or {headers.x-ms-client-principal-name} .
[FunctionName("negotiate")]
public static SignalRConnectionInfo Negotiate(
[HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
[SignalRConnectionInfo
(HubName = "chat", UserId = "{headers.x-ms-client-principal-id}")]
SignalRConnectionInfo connectionInfo)
{
// connectionInfo contains an access key token with a name identifier claim set to the authenticated
user
return connectionInfo;
}
{
"type": "signalRConnectionInfo",
"name": "connectionInfo",
"hubName": "chat",
"userId": "{headers.x-ms-client-principal-id}",
"connectionStringSetting": "<name of setting containing SignalR Service connection string>",
"direction": "in"
}
You can set the userId property of the binding to the value from either header using a binding expression:
{headers.x-ms-client-principal-id} or {headers.x-ms-client-principal-name} .
@FunctionName("negotiate")
public SignalRConnectionInfo negotiate(
@HttpTrigger(
name = "req",
methods = { HttpMethod.POST },
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> req,
@SignalRConnectionInfoInput(
name = "connectionInfo",
hubName = "chat",
userId = "{headers.x-ms-client-principal-id}") SignalRConnectionInfo connectionInfo) {
return connectionInfo;
}
Attributes
Both in-process and isolated process C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
In-process
Isolated process
C# Script
HubName | This value must be set to the name of the SignalR hub for which the connection information is generated.
ConnectionStringSetting | The name of the app setting that contains the SignalR Service connection string, which defaults to AzureSignalRConnectionString .
Annotations
The following table explains the supported settings for the SignalRConnectionInfoInput annotation.
hubName | This value must be set to the name of the SignalR hub for which the connection information is generated.
connectionStringSetting | The name of the app setting that contains the SignalR Service connection string, which defaults to AzureSignalRConnectionString .
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
hubName | This value must be set to the name of the SignalR hub for which the connection information is generated.
connectionStringSetting | The name of the app setting that contains the SignalR Service connection string, which defaults to AzureSignalRConnectionString .
Next steps
Handle messages from SignalR Service (Trigger binding)
Send SignalR Service messages (Output binding)
SignalR Service output binding for Azure Functions
8/2/2022 • 10 minutes to read
Use the SignalR output binding to send one or more messages using Azure SignalR Service. You can broadcast a
message to:
All connected clients
Connected clients authenticated to a specific user
The output binding also allows you to manage groups.
For information on setup and configuration details, see the overview.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
Broadcast to all clients
In-process
Isolated process
C# Script
The following example shows a function that sends a message using the output binding to all connected clients.
The target is the name of the method to be invoked on each client. The Arguments property is an array of zero
or more objects to be passed to the client method.
[FunctionName("SendMessage")]
public static Task SendMessage(
[HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
[SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages)
{
return signalRMessages.AddAsync(
new SignalRMessage
{
Target = "newMessage",
Arguments = new [] { message }
});
}
@FunctionName("sendMessage")
@SignalROutput(name = "$return", hubName = "chat")
public SignalRMessage sendMessage(
@HttpTrigger(
name = "req",
methods = { HttpMethod.POST },
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Object> req) {
        SignalRMessage message = new SignalRMessage();
        message.target = "newMessage";
        message.arguments.add(req.getBody());
        return message;
    }
Send to a user
You can send a message only to connections that have been authenticated to a user by setting the user ID in the
SignalR message.
In-process
Isolated process
C# Script
[FunctionName("SendMessage")]
public static Task SendMessage(
[HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
[SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages)
{
return signalRMessages.AddAsync(
new SignalRMessage
{
// the message will only be sent to this user ID
UserId = "userId1",
Target = "newMessage",
Arguments = new [] { message }
});
}
Send to a user
You can send a message only to connections that have been authenticated to a user by setting the user ID in the
SignalR message.
Example function.json:
{
"type": "signalR",
"name": "outRMessages",
"hubName": "<hub_name>",
"connectionStringSetting": "<name of setting containing SignalR Service connection string>",
"direction": "out"
}
Send to a user
You can send a message only to connections that have been authenticated to a user by setting the user ID in the
SignalR message.
@FunctionName("sendMessage")
@SignalROutput(name = "$return", hubName = "chat")
public SignalRMessage sendMessage(
@HttpTrigger(
name = "req",
methods = { HttpMethod.POST },
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Object> req) {
        SignalRMessage message = new SignalRMessage();
        // the message will only be sent to this user ID
        message.userId = "userId1";
        message.target = "newMessage";
        message.arguments.add(req.getBody());
        return message;
    }
Send to a group
You can send a message only to connections that have been added to a group by setting the group name in the
SignalR message.
In-process
Isolated process
C# Script
[FunctionName("SendMessage")]
public static Task SendMessage(
[HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
[SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages)
{
return signalRMessages.AddAsync(
new SignalRMessage
{
// the message will be sent to the group with this name
GroupName = "myGroup",
Target = "newMessage",
Arguments = new [] { message }
});
}
Send to a group
You can send a message only to connections that have been added to a group by setting the group name in the
SignalR message.
Example function.json:
{
"type": "signalR",
"name": "signalRMessages",
"hubName": "<hub_name>",
"connectionStringSetting": "<name of setting containing SignalR Service connection string>",
"direction": "out"
}
Send to a group
You can send a message only to connections that have been added to a group by setting the group name in the
SignalR message.
@FunctionName("sendMessage")
@SignalROutput(name = "$return", hubName = "chat")
public SignalRMessage sendMessage(
@HttpTrigger(
name = "req",
methods = { HttpMethod.POST },
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Object> req) {
        SignalRMessage message = new SignalRMessage();
        // the message will be sent to the group with this name
        message.groupName = "myGroup";
        message.target = "newMessage";
        message.arguments.add(req.getBody());
        return message;
    }
Group management
SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You
can use the SignalR output binding to manage groups.
In-process
Isolated process
C# Script
Specify GroupAction to add or remove a member. The following example adds a user to a group.
[FunctionName("addToGroup")]
public static Task AddToGroup(
[HttpTrigger(AuthorizationLevel.Anonymous, "post")]HttpRequest req,
ClaimsPrincipal claimsPrincipal,
[SignalR(HubName = "chat")]
IAsyncCollector<SignalRGroupAction> signalRGroupActions)
{
var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier);
return signalRGroupActions.AddAsync(
new SignalRGroupAction
{
UserId = userIdClaim.Value,
GroupName = "myGroup",
Action = GroupAction.Add
});
}
NOTE
In order to get the ClaimsPrincipal correctly bound, you must have configured the authentication settings in Azure
Functions.
Group management
SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You
can use the SignalR output binding to manage groups.
Example function.json that defines the output binding:
{
"type": "signalR",
"name": "signalRGroupActions",
"connectionStringSetting": "<name of setting containing SignalR Service connection string>",
"hubName": "chat",
"direction": "out"
}
Group management
SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You
can use the SignalR output binding to manage groups.
The following example adds a user to a group.
@FunctionName("addToGroup")
@SignalROutput(name = "$return", hubName = "chat")
public SignalRGroupAction addToGroup(
@HttpTrigger(
name = "req",
methods = { HttpMethod.POST },
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Object> req,
@BindingName("userId") String userId) {
        SignalRGroupAction groupAction = new SignalRGroupAction();
        groupAction.action = "add";
        groupAction.groupName = "myGroup";
        groupAction.userId = userId;
        return groupAction;
    }
@FunctionName("removeFromGroup")
@SignalROutput(name = "$return", hubName = "chat")
public SignalRGroupAction removeFromGroup(
@HttpTrigger(
name = "req",
methods = { HttpMethod.POST },
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Object> req,
@BindingName("userId") String userId) {
        SignalRGroupAction groupAction = new SignalRGroupAction();
        groupAction.action = "remove";
        groupAction.groupName = "myGroup";
        groupAction.userId = userId;
        return groupAction;
    }
Attributes
Both in-process and isolated process C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
In-process
Isolated process
C# Script
The following table explains the properties of the SignalR output attribute.
HubName | This value must be set to the name of the SignalR hub for which the connection information is generated.
ConnectionStringSetting | The name of the app setting that contains the SignalR Service connection string, which defaults to AzureSignalRConnectionString .
Annotations
The following table explains the supported settings for the SignalROutput annotation.
hubName | This value must be set to the name of the SignalR hub for which the connection information is generated.
connectionStringSetting | The name of the app setting that contains the SignalR Service connection string, which defaults to AzureSignalRConnectionString .
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
hubName | This value must be set to the name of the SignalR hub for which the connection information is generated.
connectionStringSetting | The name of the app setting that contains the SignalR Service connection string, which defaults to AzureSignalRConnectionString .
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Next steps
Handle messages from SignalR Service (Trigger binding)
Return the service endpoint URL and access token (Input binding)
Azure Tables bindings for Azure Functions
8/2/2022 • 4 minutes to read
Azure Functions integrates with Azure Tables via triggers and bindings. Integrating with Azure Tables allows you
to build functions that read and write data using the Tables API for Azure Storage and Cosmos DB.
NOTE
The Table bindings have historically only supported Azure Storage. Support for Cosmos DB is currently in preview. See
Table API extension (preview).
ACTION | TYPE
Read table data when a function runs | Input binding
Write table data from a function | Output binding
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The process for installing the extension varies depending on the extension version:
This version introduces the ability to connect using an identity instead of a secret. For a tutorial on configuring
your function apps with managed identities, see the creating a function app with identity-based connections
tutorial.
This version allows you to bind to types from Azure.Data.Tables. It also introduces the ability to use Cosmos DB
Table APIs.
This extension is available by installing the Microsoft.Azure.WebJobs.Extensions.Tables NuGet package into a
project using version 5.x or higher of the extensions for blobs and queues.
Using the .NET CLI:
# Install the Tables API extension
dotnet add package Microsoft.Azure.WebJobs.Extensions.Tables --version 1.0.0
# Update the combined Azure Storage extension (to a version which no longer includes Tables)
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
NOTE
Blob Storage, Queue Storage, and Table Storage now use separate extensions and are referenced individually. For example,
to use the triggers and bindings for all three services in your .NET in-process app, you should add the following packages
to your project:
Microsoft.Azure.WebJobs.Extensions.Storage.Blobs
Microsoft.Azure.WebJobs.Extensions.Storage.Queues
Microsoft.Azure.WebJobs.Extensions.Tables
Previously, the extensions shipped together as Microsoft.Azure.WebJobs.Extensions.Storage, version 4.x. This same
package also has a 5.x version, which references the split packages for blobs and queues only. When upgrading your
package references from older versions, you may therefore need to additionally reference the new
Microsoft.Azure.WebJobs.Extensions.Tables NuGet package. Also, when referencing these newer split packages, make sure
you are not referencing an older version of the combined storage package, as this will result in conflicts from two
definitions of the same bindings.
Install bundle
The Azure Tables bindings are part of an extension bundle, which is specified in your host.json project file. You
may need to modify this bundle to change the version of the bindings, or if bundles aren't already installed. To
learn more, see extension bundle.
Bundle v3.x
Bundle v2.x
Functions 1.x
Version 3.x of the extension bundle doesn't currently include the Azure Tables bindings. You need to instead use
version 2.x of the extension bundle.
Next steps
Read table data when a function runs
Write table data from a function
Azure Tables input bindings for Azure Functions
8/2/2022 • 23 minutes to read
Use the Azure Tables input binding to read a table in an Azure Storage or Cosmos DB account.
For information on setup and configuration details, see the overview.
Example
The usage of the binding depends on the extension package version and the C# modality used in your function
app, which can be one of the following:
In-process
Isolated process
C# script
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
Choose a version to see examples for the mode and version.
Combined Azure Storage extension
Table API extension
Functions 1.x
The following example shows a C# function that reads a single table row. For every message sent to the queue,
the function will be triggered.
The row key value {queueTrigger} binds the row key to the message metadata, which is the message string.
[FunctionName("TableInput")]
public static void TableInput(
[QueueTrigger("table-items")] string input,
[Table("MyTable", "MyPartition", "{queueTrigger}")] MyPoco poco,
ILogger log)
{
log.LogInformation($"PK={poco.PartitionKey}, RK={poco.RowKey}, Text={poco.Text}");
}
Use a CloudTable method parameter to read the table by using the Azure Storage SDK. Here's an example of a
function that queries an Azure Functions log table:
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.Cosmos.Table;
using System;
using System.Threading.Tasks;
namespace FunctionAppCloudTable2
{
public class LogEntity : TableEntity
{
public string OriginalName { get; set; }
}
public static class CloudTableDemo
{
[FunctionName("CloudTableDemo")]
public static async Task Run(
[TimerTrigger("0 */1 * * * *")] TimerInfo myTimer,
[Table("AzureWebJobsHostLogscommon")] CloudTable cloudTable,
ILogger log)
{
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
            ...
        }
    }
}
For more information about how to use CloudTable, see Get started with Azure Table storage.
If you try to bind to CloudTable and get an error message, make sure that you have a reference to the correct
Storage SDK version.
The following example shows an HTTP-triggered function that returns a list of person objects from a specified partition in Table storage. In the example, the partition key is extracted from the HTTP route, and the tableName and connection are from the function settings.
public class Person {
private String PartitionKey;
private String RowKey;
private String Name;
}
@FunctionName("getPersonsByPartitionKey")
public Person[] get(
@HttpTrigger(name = "getPersons", methods = {HttpMethod.GET}, authLevel =
AuthorizationLevel.FUNCTION, route="persons/{partitionKey}") HttpRequestMessage<Optional<String>> request,
@BindingName("partitionKey") String partitionKey,
@TableInput(name="persons", partitionKey="{partitionKey}", tableName="%MyTableName%",
connection="MyConnectionString") Person[] persons,
final ExecutionContext context) {
context.getLogger().info("Got query for person related to persons with partition key: " + partitionKey);
return persons;
}
The TableInput annotation can also extract the bindings from the JSON body of the request, like the following example shows.
@FunctionName("GetPersonsByKeysFromRequest")
public HttpResponseMessage get(
@HttpTrigger(name = "getPerson", methods = {HttpMethod.GET}, authLevel =
AuthorizationLevel.FUNCTION, route="query") HttpRequestMessage<Optional<String>> request,
@TableInput(name="persons", partitionKey="{partitionKey}", rowKey = "{rowKey}",
tableName="%MyTableName%", connection="MyConnectionString") Person person,
final ExecutionContext context) {
if (person == null) {
return request.createResponseBuilder(HttpStatus.NOT_FOUND)
.body("Person not found.")
.build();
}
return request.createResponseBuilder(HttpStatus.OK)
.header("Content-Type", "application/json")
.body(person)
.build();
}
The following example uses a filter to query for persons with a specific name in an Azure Table, and limits the
number of possible matches to 10 results.
@FunctionName("getPersonsByName")
public Person[] get(
@HttpTrigger(name = "getPersons", methods = {HttpMethod.GET}, authLevel =
AuthorizationLevel.FUNCTION, route="filter/{name}") HttpRequestMessage<Optional<String>> request,
@BindingName("name") String name,
@TableInput(name="persons", filter="Name eq '{name}'", take = "10", tableName="%MyTableName%",
connection="MyConnectionString") Person[] persons,
final ExecutionContext context) {
context.getLogger().info("Got query for person related to persons with name: " + name);
return persons;
}
The following example shows a table input binding in a function.json file and JavaScript code that uses the
binding. The function uses a queue trigger to read a single table row.
The function.json file specifies a partitionKey and a rowKey . The rowKey value "{queueTrigger}" indicates that
the row key comes from the queue message string.
{
"bindings": [
{
"queueName": "myqueue-items",
"connection": "MyStorageConnectionAppSetting",
"name": "myQueueItem",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "personEntity",
"type": "table",
"tableName": "Person",
"partitionKey": "Test",
"rowKey": "{queueTrigger}",
"connection": "MyStorageConnectionAppSetting",
"direction": "in"
}
],
"disabled": false
}
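The JavaScript code for this example didn't survive extraction. A minimal sketch, assuming the bindings above:
module.exports = async function (context, myQueueItem) {
    // personEntity is the single row read by the table input binding.
    context.log('Person entity name: ' + context.bindings.personEntity.Name);
};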
The following function uses a queue trigger to read a single table row as input to a function.
In this example, the binding configuration specifies an explicit value for the table's partitionKey and uses an
expression to pass to the rowKey . The rowKey expression, {queueTrigger} , indicates that the row key comes
from the queue message string.
Binding configuration in function.json:
{
"bindings": [
{
"queueName": "myqueue-items",
"connection": "MyStorageConnectionAppSetting",
"name": "MyQueueItem",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "PersonEntity",
"type": "table",
"tableName": "Person",
"partitionKey": "Test",
"rowKey": "{queueTrigger}",
"connection": "MyStorageConnectionAppSetting",
"direction": "in"
}
],
"disabled": false
}
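The PowerShell code for this example didn't survive extraction. A minimal sketch, assuming the bindings above:
param($MyQueueItem, $PersonEntity, $TriggerMetadata)
Write-Host "Person entity name: $($PersonEntity.Name)"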
The following function uses an HTTP trigger to read a single table row as input to a function.
In this example, the binding configuration specifies an explicit value for the table's partitionKey and uses an expression to pass to the rowKey . The rowKey expression, {id} , indicates that the row key comes from the {id} part of the route in the request.
import json
import azure.functions as func

def main(req: func.HttpRequest, messageJSON) -> func.HttpResponse:
    message = json.loads(messageJSON)
    return func.HttpResponse(f"Table row: {messageJSON}")
With this simple binding, you can't programmatically handle a case in which no row that has a row key ID is
found. For more fine-grained data selection, use the storage SDK.
Attributes
Both in-process and isolated process C# libraries use attributes to define the function. C# script instead uses a
function.json configuration file.
In-process
Isolated process
C# script
PartitionKey | Optional. The partition key of the table entity to read. See the usage section for guidance on how to use this property.
RowKey | Optional. The row key of a single table entity to read. Can't be used with Take or Filter .
The attribute's constructor takes the table name, partition key, and row key, as shown in the following example:
[FunctionName("TableInput")]
public static void Run(
[QueueTrigger("table-items")] string input,
[Table("MyTable", "Http", "{queueTrigger}")] MyPoco poco,
ILogger log)
{
...
}
You can set the Connection property to specify the connection to the table service, as shown in the following
example:
[FunctionName("TableInput")]
public static void Run(
[QueueTrigger("table-items")] string input,
[Table("MyTable", "Http", "{queueTrigger}", Connection = "StorageConnectionAppSetting")] MyPoco poco,
ILogger log)
{
...
}
While the attribute takes a Connection property, you can also use the StorageAccountAttribute to specify a
storage account connection. You can do this when you need to use a different storage account than other
functions in the library. The constructor takes the name of an app setting that contains a storage connection
string. The attribute can be applied at the parameter, method, or class level. The following example shows class
level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("StorageTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
...
}
Annotations
In the Java functions runtime library, use the @TableInput annotation on parameters whose value would come
from Table storage. This annotation can be used with native Java types, POJOs, or nullable values using
Optional<T> . This annotation supports the following elements:
ELEMENT | DESCRIPTION
PartitionKey | Optional. The partition key of the table entity to read.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
name | The name of the variable that represents the table or entity in function code.
partitionKey | Optional. The partition key of the table entity to read.
rowKey | Optional. The row key of the table entity to read. Can't be used with take or filter .
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Connections
The connection property is a reference to environment configuration that specifies how the app should connect
to your table service. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string for tables in Azure Storage, follow the steps shown at Manage storage account
access keys. To obtain a connection string for tables in Cosmos DB (when using the Tables API extension), follow
the steps shown at the Table API FAQ.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For
example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyStorage". If you leave connection empty, the Functions runtime uses the default Storage
connection string in the app setting that is named AzureWebJobsStorage .
Identity-based connections
If you are using the Tables API extension, instead of using a connection string with a secret, you can have the app
use an Azure Active Directory identity. This only applies when accessing tables in Azure Storage. To do this, you
would define settings under a common prefix which maps to the connection property in the trigger and
binding configuration.
If you are setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all
other connections, the extension requires the following properties:
PROPERTY ENVIRONMENT VARIABLE TEMPLATE DESCRIPTION EXAMPLE VALUE
Table service URI1 <CONNECTION_NAME_PREFIX>__tableServiceUri The data plane URI of the table service to which you are connecting, using the HTTPS scheme. https://<storage_account_name>.table.core.windows.net
1 <CONNECTION_NAME_PREFIX>__serviceUri can be used as an alias. If both forms are provided, the tableServiceUri form will be used. The serviceUri form cannot be used when the overall connection configuration is to be used across blobs, queues, and/or tables.
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
The serviceUri form cannot be used when the overall connection configuration is to be used across blobs,
queues, and/or tables in Azure Storage. The URI itself can only designate the table service. As an alternative, you
can provide a URI specifically for each service under the same prefix, allowing a single connection to be used.
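As an illustrative sketch (the MyTableConnection prefix and values are hypothetical), the app settings for an identity-based connection might look like the following; the credential and clientId entries apply only when using a user-assigned identity:
MyTableConnection__tableServiceUri = https://<storage_account_name>.table.core.windows.net
MyTableConnection__credential = managedidentity
MyTableConnection__clientId = <client-id-of-user-assigned-identity>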
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your Azure Storage table service at runtime. Management roles like Owner are not sufficient. The following built-in roles are recommended when using the Table API extension against Azure Storage in normal operation; your application may require additional permissions based on the code you write.
BINDING TYPE EXAMPLE BUILT-IN ROLES1
Input binding Storage Table Data Reader
Output binding Storage Table Data Contributor
1 If your app is instead connecting to tables in Cosmos DB, using an identity is not supported, and the connection must use a connection string.
Usage
The usage of the binding depends on the extension package version, and the C# modality used in your function
app, which can be one of the following:
In-process
Isolated process
C# script
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
Choose a version to see usage details for the mode and version.
To return a specific entity by key, use a binding parameter that derives from TableEntity.
To execute queries that return multiple entities, bind to a CloudTable object. You can then use this object to create
and execute queries against the bound table. Note that CloudTable and related APIs belong to the
Microsoft.Azure.Cosmos.Table namespace.
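Here's a sketch of that query pattern; it assumes a MyPoco class that derives from TableEntity and adds a Text property:
[FunctionName("TableQuery")]
public static async Task Run(
    [QueueTrigger("myqueue-items")] string input,
    [Table("MyTable")] CloudTable cloudTable,
    ILogger log)
{
    // Query all entities in the "Http" partition of the bound table.
    var query = new TableQuery<MyPoco>().Where(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Http"));

    TableContinuationToken token = null;
    do
    {
        // Results come back in segments; keep iterating until the continuation token is null.
        TableQuerySegment<MyPoco> segment = await cloudTable.ExecuteQuerySegmentedAsync(query, token);
        token = segment.ContinuationToken;
        foreach (MyPoco poco in segment)
        {
            log.LogInformation($"{poco.PartitionKey}/{poco.RowKey}: {poco.Text}");
        }
    } while (token != null);
}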
The TableInput attribute gives you access to the table row that triggered the function.
Set the filter and take properties. Don't set partitionKey or rowKey. Access the input table entity (or entities) using context.bindings.<BINDING_NAME>. The deserialized objects have RowKey and PartitionKey properties.
Data is passed to the input parameter as specified by the name key in the function.json file. Specifying the partitionKey and rowKey allows you to filter to specific records.
Table data is passed to the function as a JSON string. De-serialize the message by calling json.loads as shown
in the input example.
For specific usage details, see Example.
Next steps
Write table data from a function
Azure Tables output bindings for Azure Functions
8/2/2022 • 16 minutes to read
Use an Azure Tables output binding to write entities to a table in an Azure Storage or Cosmos DB account.
For information on setup and configuration details, see the overview.
NOTE
This output binding only supports creating new entities in a table. If you need to update an existing entity from your
function code, instead use an Azure Tables SDK directly.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that uses an HTTP trigger to write a single table row.
[FunctionName("TableOutput")]
[return: Table("MyTable")]
public static MyPoco TableOutput([HttpTrigger] dynamic input, ILogger log)
{
log.LogInformation($"C# http trigger function processed: {input.Text}");
return new MyPoco { PartitionKey = "Http", RowKey = Guid.NewGuid().ToString(), Text = input.Text };
}
The following example shows a Java function that uses an HTTP trigger to write a single table row.
public class Person {
    public String PartitionKey;
    public String RowKey;
    public String Name;

    @FunctionName("addPerson")
    public HttpResponseMessage get(
            @HttpTrigger(name = "postPerson", methods = {HttpMethod.POST}, authLevel =
                AuthorizationLevel.FUNCTION, route="persons/{partitionKey}/{rowKey}") HttpRequestMessage<Optional<Person>> request,
            @BindingName("partitionKey") String partitionKey,
            @BindingName("rowKey") String rowKey,
            @TableOutput(name="person", partitionKey="{partitionKey}", rowKey = "{rowKey}",
                tableName="%MyTableName%", connection="MyConnectionString") OutputBinding<Person> person,
            final ExecutionContext context) {

        // Build the entity to write, using the route values for the keys.
        Person outPerson = new Person();
        outPerson.PartitionKey = partitionKey;
        outPerson.RowKey = rowKey;
        outPerson.Name = request.getBody().map(p -> p.Name).orElse("default");

        person.setValue(outPerson);

        return request.createResponseBuilder(HttpStatus.OK)
                .header("Content-Type", "application/json")
                .body(outPerson)
                .build();
    }
}
The following example shows a Java function that uses an HTTP trigger to write multiple table rows.
public class Person {
private String PartitionKey;
private String RowKey;
private String Name;
@FunctionName("addPersons")
public HttpResponseMessage get(
@HttpTrigger(name = "postPersons", methods = {HttpMethod.POST}, authLevel =
AuthorizationLevel.FUNCTION, route="persons/") HttpRequestMessage<Optional<Person[]>> request,
@TableOutput(name="person", tableName="%MyTableName%", connection="MyConnectionString")
OutputBinding<Person[]> persons,
final ExecutionContext context) {
persons.setValue(request.getBody().get());
return request.createResponseBuilder(HttpStatus.OK)
.header("Content-Type", "application/json")
.body(request.getBody().get())
.build();
}
}
The following example shows a table output binding in a function.json file and a JavaScript function that uses
the binding. The function writes multiple table entities.
Here's the function.json file:
{
"bindings": [
{
"name": "input",
"type": "manualTrigger",
"direction": "in"
},
{
"tableName": "Person",
"connection": "MyStorageConnectionAppSetting",
"name": "tableBinding",
"type": "table",
"direction": "out"
}
],
"disabled": false
}
module.exports = async function (context) {
    context.bindings.tableBinding = [];
    // Write 10 entities to the bound table.
    for (let i = 1; i <= 10; i++) {
        context.bindings.tableBinding.push({
            PartitionKey: "Test",
            RowKey: i.toString(),
            Name: "Name " + i
        });
    }
};
The following example demonstrates how to write multiple entities to a table from a function.
Binding configuration in function.json:
{
"bindings": [
{
"name": "InputData",
"type": "manualTrigger",
"direction": "in"
},
{
"tableName": "Person",
"connection": "MyStorageConnectionAppSetting",
"name": "TableBinding",
"type": "table",
"direction": "out"
}
],
"disabled": false
}
param($InputData, $TriggerMetadata)

foreach ($i in 1..10) {
    Push-OutputBinding -Name TableBinding -Value @{
        PartitionKey = 'Test'
        RowKey = "$i"
        Name = "Name $i"
    }
}
The following example demonstrates how to use the Table storage output binding. Configure the table binding
in the function.json by assigning values to name , tableName , partitionKey , and connection :
{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "message",
"type": "table",
"tableName": "messages",
"partitionKey": "message",
"connection": "AzureWebJobsStorage",
"direction": "out"
},
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
}
]
}
The following function generates a unique UUID for the rowKey value and persists the message into Table storage.
import logging
import uuid
import json
import azure.functions as func

def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
    rowKey = str(uuid.uuid4())

    data = {
        "Name": "Output binding message",
        "PartitionKey": "message",
        "RowKey": rowKey
    }

    message.set(json.dumps(data))

    return func.HttpResponse(f"Message created with the rowKey: {rowKey}")
Attributes
Both in-process and isolated process C# libraries use attributes to define the function. C# script instead uses a
function.json configuration file.
In-process
Isolated process
C# script
In C# class libraries, the TableAttribute supports properties that include the table name, partition key, row key, and connection.
The attribute's constructor takes the table name. Use the attribute on an out parameter or on the return value
of the function, as shown in the following example:
[FunctionName("TableOutput")]
[return: Table("MyTable")]
public static MyPoco TableOutput(
[HttpTrigger] dynamic input,
ILogger log)
{
...
}
You can set the Connection property to specify a connection to the table service, as shown in the following
example:
[FunctionName("TableOutput")]
[return: Table("MyTable", Connection = "StorageConnectionAppSetting")]
public static MyPoco TableOutput(
[HttpTrigger] dynamic input,
ILogger log)
{
...
}
While the attribute takes a Connection property, you can also use the StorageAccountAttribute to specify a
storage account connection. You can do this when you need to use a different storage account than other
functions in the library. The constructor takes the name of an app setting that contains a storage connection
string. The attribute can be applied at the parameter, method, or class level. The following example shows class
level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("StorageTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
...
}
Annotations
In the Java functions runtime library, use the TableOutput annotation on parameters to write values into your
tables. The attribute supports the following elements:
ELEMENT DESCRIPTION
name The variable name used in function code that represents the table or entity.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
FUNCTION.JSON PROPERTY DESCRIPTION
name The variable name used in function code that represents the table or entity. Set to $return to reference the function return value.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Connections
The connection property is a reference to environment configuration that specifies how the app should connect
to your table service. It may specify:
The name of an application setting containing a connection string
The name of a shared prefix for multiple application settings, together defining an identity-based connection
If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact
match is used.
Connection string
To obtain a connection string for tables in Azure Storage, follow the steps shown at Manage storage account
access keys. To obtain a connection string for tables in Cosmos DB (when using the Tables API extension), follow
the steps shown at the Table API FAQ.
This connection string should be stored in an application setting with a name matching the value specified by
the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For
example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named
"AzureWebJobsMyStorage". If you leave connection empty, the Functions runtime uses the default Storage
connection string in the app setting that is named AzureWebJobsStorage .
Identity-based connections
If you are using the Tables API extension, instead of using a connection string with a secret, you can have the app
use an Azure Active Directory identity. This only applies when accessing tables in Azure Storage. To do this, you
would define settings under a common prefix which maps to the connection property in the trigger and
binding configuration.
If you are setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all
other connections, the extension requires the following properties:
PROPERTY ENVIRONMENT VARIABLE TEMPLATE DESCRIPTION EXAMPLE VALUE
Table service URI1 <CONNECTION_NAME_PREFIX>__tableServiceUri The data plane URI of the table service to which you are connecting, using the HTTPS scheme. https://<storage_account_name>.table.core.windows.net
1 <CONNECTION_NAME_PREFIX>__serviceUri can be used as an alias. If both forms are provided, the tableServiceUri form will be used. The serviceUri form cannot be used when the overall connection configuration is to be used across blobs, queues, and/or tables.
Additional properties may be set to customize the connection. See Common properties for identity-based
connections.
The serviceUri form cannot be used when the overall connection configuration is to be used across blobs,
queues, and/or tables in Azure Storage. The URI itself can only designate the table service. As an alternative, you
can provide a URI specifically for each service under the same prefix, allowing a single connection to be used.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default, although a user-assigned identity can be specified with the credential and
clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When
run in other contexts, such as local development, your developer identity is used instead, although this can be
customized. See Local development with identity-based connections.
Grant permission to the identity
Whatever identity is being used must have permissions to perform the intended actions. You will need to assign
a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
IMPORTANT
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would want to ensure the role assignment is scoped only over the resources that need to be read.
You will need to create a role assignment that provides access to your Azure Storage table service at runtime. Management roles like Owner are not sufficient. The following built-in roles are recommended when using the Table API extension against Azure Storage in normal operation; your application may require additional permissions based on the code you write.
BINDING TYPE EXAMPLE BUILT-IN ROLES1
Input binding Storage Table Data Reader
Output binding Storage Table Data Contributor
1 If your app is instead connecting to tables in Cosmos DB, using an identity is not supported, and the connection must use a connection string.
Usage
The usage of the binding depends on the extension package version, and the C# modality used in your function
app, which can be one of the following:
In-process
Isolated process
C# script
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
Choose a version to see usage details for the mode and version.
You can also bind to CloudTable from the Storage SDK as a method parameter. You can then use that object to
write to the table.
There are two options for writing a Table storage row from a function by using the TableOutput annotation:
OPTIONS DESCRIPTION
Return value By applying the annotation to the function itself, the return value of the function persists as a Table storage row.
Imperative To explicitly set the row value, apply the annotation to a parameter of type OutputBinding<T> and set the value with the setValue method.
Access the output event by using context.bindings.<name> where <name> is the value specified in the name
property of function.json.
To write to table data, use the Push-OutputBinding cmdlet, set the -Name TableBinding parameter and -Value
parameter equal to the row data. See the PowerShell example for more detail.
There are two options for outputting a Table storage row message from a function:
OPTIONS DESCRIPTION
Return value Set the name property in function.json to $return and return the entity from the function.
Imperative Pass a value to the set method of the parameter declared as an Out type.
Next steps
Learn more about Azure functions triggers and bindings
Timer trigger for Azure Functions
8/2/2022 • 13 minutes to read
This article explains how to work with timer triggers in Azure Functions. A timer trigger lets you run a function
on a schedule.
This is reference information for Azure Functions developers. If you're new to Azure Functions, start with the
following resources:
Azure Functions developer reference.
Create your first function.
C# developer references:
In-process class library
Isolated process class library
C# script
Create your first function.
JavaScript developer reference.
Create your first function.
Java developer reference.
Create your first function.
Python developer reference
Create your first function.
PowerShell developer reference
Azure Functions triggers and bindings concepts.
Code and test Azure Functions locally.
For information on how to manually run a timer-triggered function, see Manually run a non HTTP-triggered
function.
Support for this binding is automatically provided in all development environments. You don't have to manually
install the package or register the extension.
Source code for the timer extension package is in the azure-webjobs-sdk-extensions GitHub repository.
Example
This example shows a C# function that executes each time the minutes have a value divisible by five. For
example, when the function starts at 18:55:00, the next execution is at 19:00:00. A TimerInfo object is passed to
the function.
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
[FunctionName("TimerTriggerCSharp")]
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger log)
{
if (myTimer.IsPastDue)
{
log.LogInformation("Timer is running late!");
}
log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
}
The following example function triggers and executes every five minutes. The @TimerTrigger annotation on the
function defines the schedule using the same string format as CRON expressions.
@FunctionName("keepAlive")
public void keepAlive(
@TimerTrigger(name = "keepAliveTrigger", schedule = "0 */5 * * * *") String timerInfo,
ExecutionContext context
) {
// timerInfo is a JSON string; you can deserialize it to an object using your favorite JSON library
context.getLogger().info("Timer is triggered: " + timerInfo);
}
The following example shows a timer trigger binding in a function.json file and function code that uses the
binding, where an instance representing the timer is passed to the function. The function writes a log indicating
whether this function invocation is due to a missed schedule occurrence.
Here's the binding data in the function.json file:
{
"schedule": "0 */5 * * * *",
"name": "myTimer",
"type": "timerTrigger",
"direction": "in"
}
module.exports = async function (context, myTimer) {
    var timeStamp = new Date().toISOString();

    if (myTimer.isPastDue)
    {
        context.log('Node is running late!');
    }
    context.log('Node timer trigger function ran!', timeStamp);
};
# Input bindings are passed in via the param block.
param($myTimer)

# Get the current universal time in the default string format.
$currentUTCtime = (Get-Date).ToUniversalTime()

# The 'IsPastDue' property is 'true' when the current function invocation is later than scheduled.
if ($myTimer.IsPastDue) {
    Write-Host "PowerShell timer is running late!"
}

# Write an information log with the current time.
Write-Host "PowerShell timer trigger function ran! TIME: $currentUTCtime"
Here's the Python code, where the object passed into the function is of type azure.functions.TimerRequest object.
import datetime
import logging
import azure.functions as func

def main(mytimer: func.TimerRequest) -> None:
    if mytimer.past_due:
        logging.info('The timer is past due!')
    logging.info('Python timer trigger function ran.')
Attributes
Both in-process and isolated process C# libraries use the TimerTriggerAttribute attribute to define the function.
C# script instead uses a function.json configuration file.
In-process
Isolated process
C# script
ATTRIBUTE PROPERTY DESCRIPTION
RunOnStartup If true, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. Use with caution. RunOnStartup should rarely if ever be set to true, especially in production.
Annotations
The @TimerTrigger annotation on the function defines the schedule using the same string format as CRON expressions. The annotation supports the following settings: dataType, name, and schedule.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
FUNCTION.JSON PROPERTY DESCRIPTION
name The name of the variable that represents the timer object in function code.
runOnStartup If true, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. Use with caution. runOnStartup should rarely if ever be set to true, especially in production.
When you're developing locally, add your application settings in the local.settings.json file in the Values
collection.
Caution
Don't set runOnStartup to true in production. Using this setting makes code execute at highly unpredictable times. In certain production settings, these extra executions can result in significantly higher costs for apps hosted in a Consumption plan. For example, with runOnStartup enabled the trigger is invoked whenever your function app is scaled. Make sure you fully understand the production behavior of your functions before enabling runOnStartup in production.
See the Example section for complete examples.
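As a hedged sketch, the in-process attribute form with this setting looks like the following (the schedule and names are illustrative; for local testing only):
[FunctionName("StartupTimer")]
public static void Run(
    [TimerTrigger("0 */5 * * * *", RunOnStartup = true)] TimerInfo myTimer,
    ILogger log)
{
    // Runs on the normal schedule, plus once whenever the host starts.
    log.LogInformation($"Timer fired at: {DateTime.Now}");
}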
Usage
When a timer trigger function is invoked, a timer object is passed into the function. The following JSON is an
example representation of the timer object.
{
"Schedule":{
"AdjustForDST": true
},
"ScheduleStatus": {
"Last":"2016-10-04T10:15:00+00:00",
"LastUpdated":"2016-10-04T10:16:00+00:00",
"Next":"2016-10-04T10:20:00+00:00"
},
"IsPastDue":false
}
The isPastDue property is true when the current function invocation is later than scheduled. For example, a
function app restart might cause an invocation to be missed.
NCRONTAB expressions
Azure Functions uses the NCronTab library to interpret NCRONTAB expressions. An NCRONTAB expression is
similar to a CRON expression except that it includes an additional sixth field at the beginning to use for time
precision in seconds:
{second} {minute} {hour} {day} {month} {day-of-week}
To specify months or days you can use numeric values, names, or abbreviations of names:
For days, the numeric values are 0 to 6 where 0 starts with Sunday.
Names are in English. For example: Monday , January .
Names are case-insensitive.
Names can be abbreviated. Three letters is the recommended abbreviation length. For example: Mon , Jan .
NCRONTAB examples
Here are some examples of NCRONTAB expressions you can use for the timer trigger in Azure Functions.
EXAMPLE WHEN TRIGGERED
"0 */5 * * * *" once every five minutes
"0 0 * * * *" once at the top of every hour
"0 0 */2 * * *" once every two hours
"0 0 9-17 * * *" once every hour from 9 AM to 5 PM
"0 30 9 * * *" at 9:30 AM every day
"0 30 9 * * 1-5" at 9:30 AM every weekday
"0 30 9 * Jan Mon" at 9:30 AM every Monday in January
NOTE
NCRONTAB expression supports both five field and six field format. The sixth field position is a value for seconds which
is placed at the beginning of the expression.
To have your timer trigger run in a specific time zone, set the WEBSITE_TIME_ZONE application setting for your function app; supported values depend on the operating system and hosting plan.
NOTE
WEBSITE_TIME_ZONE is not currently supported on the Linux Consumption plan.
For example, Eastern Time in the US (represented by Eastern Standard Time (Windows) or America/New_York
(Linux)) currently uses UTC-05:00 during standard time and UTC-04:00 during daylight time. To have a timer
trigger fire at 10:00 AM Eastern Time every day, create an app setting for your function app named
WEBSITE_TIME_ZONE , set the value to Eastern Standard Time (Windows) or America/New_York (Linux), and then
use the following NCRONTAB expression:
"0 0 10 * * *"
When you use WEBSITE_TIME_ZONE the time is adjusted for time changes in the specific timezone, including
daylight saving time and changes in standard time.
TimeSpan
A TimeSpan can be used only for a function app that runs on an App Service Plan.
Unlike a CRON expression, a TimeSpan value specifies the time interval between each function invocation. When
a function completes after running longer than the specified interval, the timer immediately invokes the function
again.
Expressed as a string, the TimeSpan format is hh:mm:ss where hh is less than 24. When the first two digits are 24 or greater, the format is dd:hh:mm. Here are some examples:
EXAMPLE WHEN TRIGGERED
"01:00:00" every hour
"00:01:00" every minute
"24:00:00" every 24 days
"1.00:00:00" every day
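For instance, here's a sketch of a function.json timer binding that uses a one-hour TimeSpan interval instead of an NCRONTAB expression (valid only on an App Service plan):
{
    "schedule": "01:00:00",
    "name": "myTimer",
    "type": "timerTrigger",
    "direction": "in"
}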
Scale-out
If a function app scales out to multiple instances, only a single instance of a timer-triggered function is run across all instances. The timer won't trigger again while an outstanding invocation is still running.
Function apps sharing Storage
If you are sharing storage accounts across function apps that are not deployed to App Service, you might need to explicitly assign a host ID to each app.
FUNCTIONS VERSION SETTING
2.x+ AzureFunctionsWebHost__hostId app setting
1.x id property in host.json
You can omit the identifying value or manually set each function app's identifying configuration to a different
value.
The timer trigger uses a storage lock to ensure that there is only one timer instance when a function app scales
out to multiple instances. If two function apps share the same identifying configuration and each uses a timer
trigger, only one timer runs.
Retry behavior
Unlike the queue trigger, the timer trigger doesn't retry after a function fails. When a function fails, it isn't called
again until the next time on the schedule.
Manually invoke a timer trigger
The timer trigger for Azure Functions provides an HTTP webhook that can be invoked to manually trigger the
function. This can be extremely useful in the following scenarios.
Integration testing
Slot swaps as part of a smoke test or warmup activity
Initial deployment of a function to immediately populate a cache or lookup table in a database
Please refer to manually run a non HTTP-triggered function for details on how to manually invoke a timer
triggered function.
Troubleshooting
For information about what to do when the timer trigger doesn't work as expected, see Investigating and
reporting issues with timer triggered functions not firing.
Next steps
Go to a quickstart that uses a timer trigger
Learn more about Azure functions triggers and bindings
Twilio binding for Azure Functions
8/2/2022 • 9 minutes to read
This article explains how to send text messages by using Twilio bindings in Azure Functions. Azure Functions
supports output bindings for Twilio.
This is reference information for Azure Functions developers. If you're new to Azure Functions, start with the
following resources:
Azure Functions developer reference.
Create your first function.
C# developer references:
In-process class library
Isolated process class library
C# script
Create your first function.
JavaScript developer reference.
Create your first function.
Java developer reference.
Create your first function.
Python developer reference
Create your first function.
PowerShell developer reference
Azure Functions triggers and bindings concepts.
Code and test Azure Functions locally.
Install extension
The extension NuGet package you install depends on the C# mode you're using in your function app:
In-process
Isolated process
C# script
Functions execute in the same process as the Functions host. To learn more, see Develop C# class library
functions using Azure Functions.
The functionality of the extension varies depending on the extension version:
Functions v2.x+
Functions v1.x
Add the extension to your project by installing the NuGet package, version 3.x.
Install bundle
Starting with Functions version 2.x, the Twilio extension is part of an extension bundle, which is specified in your host.json project file. To learn more, see extension bundle.
Bundle v2.x
Functions 1.x
This version of the extension should already be available to your function app with extension bundle, version 2.x.
Example
Unless otherwise noted, these examples are specific to version 2.x and later versions of the Functions runtime.
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that sends a text message when triggered by a queue message.
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;
namespace TwilioQueueOutput
{
public static class QueueTwilio
{
[FunctionName("QueueTwilio")]
[return: TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken",
From = "+1425XXXXXXX")]
public static CreateMessageOptions Run(
[QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] JObject order,
ILogger log)
{
    log.LogInformation($"C# Queue trigger function processed: {order}");

    // Build the SMS from fields on the queued order.
    var message = new CreateMessageOptions(new PhoneNumber(order["mobileNumber"].ToString()))
    {
        Body = $"Hello {order["name"]}, thanks for your order!"
    };

    return message;
}
}
}
This example uses the TwilioSms attribute with the method return value. An alternative is to use the attribute
with an out CreateMessageOptions parameter or an ICollector<CreateMessageOptions> or
IAsyncCollector<CreateMessageOptions> parameter.
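A sketch of the collector form, assuming the same queue payload and app settings as the example above:
[FunctionName("QueueTwilioCollector")]
public static async Task Run(
    [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] JObject order,
    [TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken", From = "+1425XXXXXXX")] IAsyncCollector<CreateMessageOptions> messages,
    ILogger log)
{
    // Build the SMS from the queued order and add it to the collector.
    var message = new CreateMessageOptions(new PhoneNumber(order["mobileNumber"].ToString()))
    {
        Body = $"Hello {order["name"]}, thanks for your order!"
    };
    await messages.AddAsync(message);
}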
The following example shows a Twilio output binding in a function.json file and a JavaScript function that uses
the binding.
Here's binding data in the function.json file:
Example function.json:
{
"type": "twilioSms",
"name": "message",
"accountSidSetting": "TwilioAccountSid",
"authTokenSetting": "TwilioAuthToken",
"from": "+1425XXXXXXX",
"direction": "out",
"body": "Azure Functions Testing"
}
module.exports = async function (context, myQueueItem) {
    // In this example the queue item is a JSON string representing an order that contains the name of a
    // customer and a mobile number to send text updates to.
    var msg = "Hello " + myQueueItem.name + ", thank you for your order.";
// Even if you want to use a hard coded message in the binding, you must at least
// initialize the message binding.
context.bindings.message = {};
// A dynamic message can be set instead of the body in the output binding. The "To" number
// must be specified in code.
context.bindings.message = {
body : msg,
to : myQueueItem.mobileNumber
};
};
{
"type": "twilioSms",
"name": "twilioMessage",
"accountSidSetting": "TwilioAccountSID",
"authTokenSetting": "TwilioAuthToken",
"from": "+1XXXXXXXXXX",
"direction": "out",
"body": "Azure Functions Testing"
}
You can pass a serialized JSON object to the func.Out parameter to send the SMS message.
import logging
import json
import azure.functions as func

def main(req: func.HttpRequest, twilioMessage: func.Out[str]) -> func.HttpResponse:
    message = req.params.get('message')
    to = req.params.get('to')

    value = {
        "body": message,
        "to": to
    }

    twilioMessage.set(json.dumps(value))

    return func.HttpResponse(f"Message sent to {to}.")
The following example shows how to use the TwilioSmsOutput annotation to send an SMS message. Values for
to , from , and body are required in the attribute definition even if you override them programmatically.
package com.function;

import java.util.*;
import com.microsoft.azure.functions.annotation.*;
import com.microsoft.azure.functions.*;

public class TwilioOutput {

    @FunctionName("TwilioOutput")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = { HttpMethod.GET, HttpMethod.POST },
                authLevel = AuthorizationLevel.FUNCTION) HttpRequestMessage<Optional<String>> request,
            @TwilioSmsOutput(
                name = "twilioMessage",
                accountSid = "AzureWebJobsTwilioAccountSID",
                authToken = "AzureWebJobsTwilioAuthToken",
                to = "+1XXXXXXXXXX",
                body = "From Azure Functions",
                from = "+1XXXXXXXXXX") OutputBinding<String> twilioMessage,
            final ExecutionContext context) {

        // Use the request body as the SMS text, falling back to the default.
        String body = request.getBody().orElse("From Azure Functions");
        twilioMessage.setValue(body);

        return request.createResponseBuilder(HttpStatus.OK).body("SMS sent: " + body).build();
    }
}
Attributes
Both in-process and isolated process C# libraries use attributes to define the output binding. C# script instead
uses a function.json configuration file.
In-process
Isolated process
C# Script
In in-process function apps, use the TwilioSmsAttribute, which supports the following parameters.
ATTRIBUTE PROPERTY DESCRIPTION
AccountSidSetting This value must be set to the name of an app setting that holds your Twilio Account Sid (TwilioAccountSid). When not set, the default app setting name is AzureWebJobsTwilioAccountSid.
AuthTokenSetting This value must be set to the name of an app setting that holds your Twilio authentication token (TwilioAccountAuthToken). When not set, the default app setting name is AzureWebJobsTwilioAuthToken.
To This value is set to the phone number that the SMS text is sent to.
From This value is set to the phone number that the SMS text is sent from.
Body This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function.
Annotations
The TwilioSmsOutput annotation allows you to declaratively configure the Twilio output binding by providing configuration values such as the account SID and auth token settings. Place the TwilioSmsOutput annotation on an OutputBinding<T> parameter, where T may be any native Java type such as int, String, byte[], or a POJO type.
Configuration
The following table explains the binding configuration properties that you set in the function.json file, which
differs by runtime version:
Functions v2.x+
Functions 1.x
FUNCTION.JSON PROPERTY DESCRIPTION
name Variable name used in function code for the Twilio SMS text message.
accountSidSetting This value must be set to the name of an app setting that holds your Twilio Account Sid (TwilioAccountSid). When not set, the default app setting name is AzureWebJobsTwilioAccountSid.
authTokenSetting This value must be set to the name of an app setting that holds your Twilio authentication token (TwilioAccountAuthToken). When not set, the default app setting name is AzureWebJobsTwilioAuthToken.
from This value is set to the phone number that the SMS text is sent from.
body This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function.
Next steps
Learn more about Azure functions triggers and bindings
Azure Functions warmup trigger
8/2/2022 • 5 minutes to read
This article explains how to work with the warmup trigger in Azure Functions. A warmup trigger is invoked
when an instance is added to scale a running function app. The warmup trigger lets you define a function that's
run when a new instance of your function app is started. You can use a warmup trigger to pre-load custom
dependencies during the pre-warming process so your functions are ready to start processing requests
immediately. Some actions for a warmup trigger might include opening connections, loading dependencies, or
running any other custom logic before your app begins receiving traffic. To learn more, see pre-warmed
instances.
The following considerations apply when using a warmup trigger:
The warmup trigger isn't available to apps running on the Consumption plan.
The warmup trigger isn't supported on version 1.x of the Functions runtime.
Support for the warmup trigger is provided by default in all development environments. You don't have to
manually install the package or register the extension.
There can be only one warmup trigger function per function app, and it can't be invoked after the instance is
already running.
The warmup trigger is only called during scale-out operations, not during restarts or other non-scale
startups. Make sure your logic can load all required dependencies without relying on the warmup trigger.
Lazy loading is a good pattern to achieve this goal.
Dependencies created by warmup trigger should be shared with other functions in your app. To learn more,
see Static clients.
Example
A C# function can be created using one of the following C# modes:
In-process class library: compiled C# function that runs in the same process as the Functions runtime.
Isolated process class library: compiled C# function that runs in a process isolated from the runtime. Isolated
process is required to support C# functions running on .NET 5.0.
C# script: used primarily when creating C# functions in the Azure portal.
In-process
Isolated process
C# Script
The following example shows a C# function that runs on each new instance when it's added to your app.
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace WarmupSample
{
    public static class Warmup
    {
        [FunctionName("Warmup")]
        public static void Run([WarmupTrigger()] WarmupContext context, ILogger log)
        {
            // Initialize shared dependencies here, before the instance starts receiving traffic.
            log.LogInformation("Function App instance is warm.");
        }
    }
}
The following example shows a warmup trigger that runs when each new instance is added to your app.
@FunctionName("Warmup")
public void run( ExecutionContext context) {
context.getLogger().info("Function App instance is warm ");
}
The following example shows a warmup trigger in a function.json file and a JavaScript function that runs on
each new instance when it's added to your app.
Here's the function.json file:
{
"bindings": [
{
"type": "warmupTrigger",
"direction": "in",
"name": "warmupContext"
}
]
}
Here's the JavaScript code:
module.exports = async function (context, warmupContext) {
    context.log('Function App instance is warm.');
};
Here's the function.json file for the Python version:
{
"bindings": [
{
"type": "warmupTrigger",
"direction": "in",
"name": "warmupContext"
}
]
}
import logging
import azure.functions as func

def main(warmupContext: func.Context) -> None:
    logging.info('Function App instance is warm.')
Attributes
Both in-process and isolated process C# libraries use the WarmupTrigger attribute to define the function. C#
script instead uses a function.json configuration file.
In-process
Isolated process
C# script
Use the WarmupTrigger attribute to define the function. This attribute has no parameters.
Annotations
Annotations aren't required by a warmup trigger. Just use a name of warmup (case-insensitive) for the
FunctionName annotation.
Configuration
The following table explains the binding configuration properties that you set in the function.json file.
FUNCTION.JSON PROPERTY DESCRIPTION
type Required. Must be set to warmupTrigger.
direction Required. Must be set to in.
name Required. The name of the variable that represents the warmup context in function code.
Usage
The following considerations apply to using a warmup function in C#:
In-process
Isolated process
C# script
Your function must be named warmup (case-insensitive) using the FunctionName attribute.
A return value attribute isn't required.
You must be using version 3.0.5 of the Microsoft.Azure.WebJobs.Extensions package, or a later version.
You can pass a WarmupContext instance to the function.
Your function must be named warmup (case-insensitive) using the FunctionName annotation.
The function type in function.json must be set to warmupTrigger .
Next steps
Learn more about Azure functions triggers and bindings
Learn more about Premium plan
AZFW0001: Invalid binding attributes
8/2/2022 • 2 minutes to read
This rule is triggered when invalid WebJobs binding attributes are used in the function definition.
VALUE
Rule ID AZFW0001
Category [Usage]
Severity Error
Rule description
The Azure Functions .NET Worker uses a different input and output binding model, which is incompatible with
the WebJobs binding model used by the Azure Functions in-process model.
To support the existing bindings and triggers, a new set of packages compatible with the new binding model has been introduced. These packages follow a naming convention that makes it easy to find a suitable replacement: simply change the prefix Microsoft.Azure.WebJobs.Extensions.* to Microsoft.Azure.Functions.Worker.Extensions.*. For example, Microsoft.Azure.WebJobs.Extensions.Storage becomes Microsoft.Azure.Functions.Worker.Extensions.Storage.
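As a hedged sketch, an isolated-process function using the Worker packages looks like the following (the queue name and class names are illustrative):
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class QueueFunctions
{
    private readonly ILogger _logger;

    public QueueFunctions(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<QueueFunctions>();
    }

    // This QueueTrigger comes from Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues,
    // not from the WebJobs extension packages used by the in-process model.
    [Function("ProcessOrder")]
    public void Run([QueueTrigger("orders")] string message)
    {
        _logger.LogInformation($"Processed: {message}");
    }
}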
AZF0001: Avoid async void
This rule is triggered when the void return type is used in an async function definition.
VALUE
Rule ID AZF0001
Category [Usage]
Severity Error
Rule description
Defining async functions with a void return type makes it impossible for the Functions runtime to track invocation completion or to catch and handle exceptions thrown by the function method.
Refer to this article for general async void information: https://msdn.microsoft.com/magazine/jj991977.aspx
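A hedged sketch of the fix, using an illustrative queue-triggered function, is to return Task instead of void so the runtime can observe completion:
// Flagged by AZF0001: exceptions thrown here can't be observed by the runtime.
// public static async void Run([QueueTrigger("items")] string message, ILogger log) { ... }

// Preferred: return Task so the runtime can await completion and handle failures.
[FunctionName("ProcessItem")]
public static async Task Run([QueueTrigger("items")] string message, ILogger log)
{
    await Task.Delay(100); // stand-in for real asynchronous work
    log.LogInformation($"Processed: {message}");
}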
AZF0002: Inefficient HttpClient usage
This rule is triggered when an HttpClient is instantiated within a [FunctionName]-decorated method.
VALUE
Rule ID AZF0002
Category [Reliability]
Severity Warning
Rule description
Simple usage of HttpClient to make HTTP requests presents several issues, including vulnerability to socket
exhaustion. In a Function app, calling the HttpClient constructor in the body of a function method will create a
new instance with every function invocation, amplifying these issues. For apps running on a Consumption
hosting plan, inefficient HttpClient usage can exhaust the plan's outbound connection limits.
The recommended best practice is to use an IHttpClientFactory with dependency injection or a single static HttpClient instance, depending on the nature of your application.
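For instance, a minimal sketch of the static-client approach (the URL is illustrative):
public static class HttpFunctions
{
    // A single static HttpClient is shared across invocations, avoiding socket exhaustion.
    private static readonly HttpClient httpClient = new HttpClient();

    [FunctionName("CallApi")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        string response = await httpClient.GetStringAsync("https://example.com/api/data");
        return new OkObjectResult(response);
    }
}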
Next steps
For more information about connection management best practices in Azure Functions, see Manage
connections in Azure Functions.
For more information about HttpClient behavior issues and management, see Use IHttpClientFactory to
implement resilient HTTP requests
host.json reference for Azure Functions 2.x and later
8/2/2022 • 14 minutes to read
The host.json metadata file contains configuration options that affect all functions in a function app instance.
This article lists the settings that are available starting with version 2.x of the Azure Functions runtime.
NOTE
This article is for Azure Functions 2.x and later versions. For a reference of host.json in Functions 1.x, see host.json
reference for Azure Functions 1.x.
Other function app configuration options are managed depending on where the function app runs:
Deployed to Azure : in your application settings
On your local computer : in the local.settings.json file.
Configurations in host.json related to bindings are applied equally to each function in the function app.
You can also override or apply settings per environment using application settings.
{
"version": "2.0",
"aggregator": {
"batchSize": 1000,
"flushTimeout": "00:00:30"
},
"extensions": {
"blobs": {},
"cosmosDb": {},
"durableTask": {},
"eventHubs": {},
"http": {},
"queues": {},
"sendGrid": {},
"serviceBus": {}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[1.*, 2.0.0)"
},
"functions": [ "QueueProcessor", "GitHubWebHook" ],
"functionTimeout": "00:05:00",
"healthMonitor": {
"enabled": true,
"healthCheckInterval": "00:00:10",
"healthCheckWindow": "00:02:00",
"healthCheckThreshold": 6,
"counterThreshold": 0.80
},
"logging": {
"fileLoggingMode": "debugOnly",
"logLevel": {
"Function.MyFunction": "Information",
"Function.MyFunction": "Information",
"default": "None"
},
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"maxTelemetryItemsPerSecond" : 20,
"evaluationInterval": "01:00:00",
"initialSamplingPercentage": 100.0,
"samplingPercentageIncreaseTimeout" : "00:00:01",
"samplingPercentageDecreaseTimeout" : "00:00:01",
"minSamplingPercentage": 0.1,
"maxSamplingPercentage": 100.0,
"movingAverageRatio": 1.0,
"excludedTypes" : "Dependency;Event",
"includedTypes" : "PageView;Trace"
},
"enableLiveMetrics": true,
"enableDependencyTracking": true,
"enablePerformanceCountersCollection": true,
"httpAutoCollectionOptions": {
"enableHttpTriggerExtendedInfoCollection": true,
"enableW3CDistributedTracing": true,
"enableResponseHeaderInjection": true
},
"snapshotConfiguration": {
"agentEndpoint": null,
"captureSnapshotMemoryWeight": 0.5,
"failedRequestLimit": 3,
"handleUntrackedExceptions": true,
"isEnabled": true,
"isEnabledInDeveloperMode": false,
"isEnabledWhenProfiling": true,
"isExceptionSnappointsEnabled": false,
"isLowPrioritySnapshotUploader": true,
"maximumCollectionPlanSize": 50,
"maximumSnapshotsRequired": 3,
"problemCounterResetInterval": "24:00:00",
"provideAnonymousTelemetry": true,
"reconnectInterval": "00:15:00",
"shadowCopyFolder": null,
"shareUploaderProcess": true,
"snapshotInLowPriorityThread": true,
"snapshotsPerDayLimit": 30,
"snapshotsPerTenMinutesLimit": 1,
"tempFolder": null,
"thresholdForSnapshotting": 1,
"uploaderProxy": null
}
}
},
"managedDependency": {
"enabled": true
},
"singleton": {
"lockPeriod": "00:00:15",
"listenerLockPeriod": "00:01:00",
"listenerLockRecoveryPollingInterval": "00:01:00",
"lockAcquisitionTimeout": "00:01:00",
"lockAcquisitionPollingInterval": "00:00:03"
},
"watchDirectories": [ "Shared", "Test" ],
"watchFiles": [ "myFile.txt" ]
}
The following sections of this article explain each top-level property. All are optional unless otherwise indicated.
aggregator
Specifies how many function invocations are aggregated when calculating metrics for Application Insights.
{
"aggregator": {
"batchSize": 1000,
"flushTimeout": "00:00:30"
}
}
Function invocations are aggregated when the first of the two limits is reached.
applicationInsights
This setting is a child of logging.
Controls options for Application Insights, including sampling options.
For the complete JSON structure, see the earlier example host.json file.
NOTE
Log sampling may cause some executions to not show up in the Application Insights monitor blade. To avoid log
sampling, add excludedTypes: "Request" to the samplingSettings value.
applicationInsights.httpAutoCollectionOptions
applicationInsights.snapshotConfiguration
For more information on snapshots, see Debug snapshots on exceptions in .NET apps and Troubleshoot
problems enabling Application Insights Snapshot Debugger or viewing snapshots.
blobs
Configuration settings can be found in Storage blob triggers and bindings.
console
This setting is a child of logging. It controls the console logging when not in debugging mode.
{
"logging": {
...
"console": {
"isEnabled": false,
"DisableColors": true
},
...
}
}
cosmosDb
Configuration setting can be found in Cosmos DB triggers and bindings.
customHandler
Configuration settings for a custom handler. For more information, see Azure Functions custom handlers.
"customHandler": {
"description": {
"defaultExecutablePath": "server",
"workingDirectory": "handler",
"arguments": [ "--port", "%FUNCTIONS_CUSTOMHANDLER_PORT%" ]
},
"enableForwardingHttpRequest": false
}
durableTask
Configuration setting can be found in bindings for Durable Functions.
eventHub
Configuration settings can be found in Event Hub triggers and bindings.
extensions
Property that returns an object that contains all of the binding-specific settings, such as http and eventHub.
extensionBundle
Extension bundles let you add a compatible set of Functions binding extensions to your function app. To learn
more, see Extension bundles for local development.
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[3.3.0, 4.0.0)"
}
}
NOTE
Version 3.x of the extension bundle currently does not include the Table Storage bindings. If your app requires Table
Storage, you will need to continue using the 2.x version for now.
functions
A list of functions that the job host runs. An empty array means run all functions. Intended for use only when
running locally. In function apps in Azure, you should instead follow the steps in How to disable functions in
Azure Functions to disable specific functions rather than using this setting.
{
"functions": [ "QueueProcessor", "GitHubWebHook" ]
}
functionTimeout
Indicates the timeout duration for all functions. It follows the timespan string format.
PLAN DEFAULT (MINUTES) MAXIMUM (MINUTES)
Consumption 5 10
Premium1 30 -1 (unbounded)2
Dedicated 30 -1 (unbounded)2
1 Premium plan execution is only guaranteed for 60 minutes, but technically unbounded.
2 A value of -1 indicates unbounded execution, but keeping a fixed upper bound is recommended.
{
"functionTimeout": "00:05:00"
}
healthMonitor
Configuration settings for Host health monitor.
{
"healthMonitor": {
"enabled": true,
"healthCheckInterval": "00:00:10",
"healthCheckWindow": "00:02:00",
"healthCheckThreshold": 6,
"counterThreshold": 0.80
}
}
http
Configuration settings can be found in http triggers and bindings.
logging
Controls the logging behaviors of the function app, including Application Insights.
"logging": {
"fileLoggingMode": "debugOnly",
"logLevel": {
"Function.MyFunction": "Information",
"default": "None"
},
"console": {
...
},
"applicationInsights": {
...
}
}
managedDependency
Managed dependency is a feature that is currently only supported with PowerShell based functions. It enables
dependencies to be automatically managed by the service. When the enabled property is set to true , the
requirements.psd1 file is processed. Dependencies are updated when any minor versions are released. For more
information, see Managed dependency in the PowerShell article.
{
"managedDependency": {
"enabled": true
}
}
queues
Configuration settings can be found in Storage queue triggers and bindings.
sendGrid
Configuration setting can be found in SendGrid triggers and bindings.
serviceBus
Configuration setting can be found in Service Bus triggers and bindings.
singleton
Configuration settings for Singleton lock behavior. For more information, see GitHub issue about singleton
support.
{
"singleton": {
"lockPeriod": "00:00:15",
"listenerLockPeriod": "00:01:00",
"listenerLockRecoveryPollingInterval": "00:01:00",
"lockAcquisitionTimeout": "00:01:00",
"lockAcquisitionPollingInterval": "00:00:03"
}
}
version
This value indicates the schema version of host.json. The version string "version": "2.0" is required for a
function app that targets the v2 runtime, or a later version. There are no host.json schema changes between v2
and v3.
watchDirectories
A set of shared code directories that should be monitored for changes. Ensures that when code in these
directories is changed, the changes are picked up by your functions.
{
"watchDirectories": [ "Shared" ]
}
watchFiles
An array of one or more names of files that are monitored for changes that require your app to restart. This guarantees that when code in these files is changed, the updates are picked up by your functions.
{
"watchFiles": [ "myFile.txt" ]
}
You can override individual host.json settings per environment by creating an application setting of the form AzureFunctionsJobHost__path__to__setting. For example, the following local.settings.json disables Application Insights sampling locally:
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "{storage-account-connection-string}",
"FUNCTIONS_WORKER_RUNTIME": "{language-runtime}",
"AzureFunctionsJobHost__logging__applicationInsights__samplingSettings__isEnabled":"false"
}
}
Next steps
Learn how to update the host.json file
See global settings in environment variables
host.json reference for Azure Functions 1.x
8/2/2022 • 13 minutes to read
The host.json metadata file contains configuration options that affect all functions in a function app instance.
This article lists the settings that are available for the version 1.x runtime. The JSON schema is at
http://json.schemastore.org/host.
NOTE
This article is for Azure Functions 1.x. For a reference of host.json in Functions 2.x and later, see host.json reference for
Azure Functions 2.x.
Other function app configuration options are managed in your app settings.
Some host.json settings are only used when running locally in the local.settings.json file.
{
"aggregator": {
"batchSize": 1000,
"flushTimeout": "00:00:30"
},
"applicationInsights": {
"sampling": {
"isEnabled": true,
"maxTelemetryItemsPerSecond" : 5
}
},
"documentDB": {
"connectionMode": "Gateway",
"protocol": "Https",
"leaseOptions": {
"leasePrefix": "prefix"
}
},
"eventHub": {
"maxBatchSize": 64,
"prefetchCount": 256,
"batchCheckpointFrequency": 1
},
"functions": [ "QueueProcessor", "GitHubWebHook" ],
"functionTimeout": "00:05:00",
"healthMonitor": {
"enabled": true,
"healthCheckInterval": "00:00:10",
"healthCheckWindow": "00:02:00",
"healthCheckThreshold": 6,
"counterThreshold": 0.80
},
"http": {
"routePrefix": "api",
"maxOutstandingRequests": 20,
"maxConcurrentRequests": 10,
"dynamicThrottlesEnabled": false
},
"id": "9f4ea53c5136457d883d685e57164f08",
"id": "9f4ea53c5136457d883d685e57164f08",
"logger": {
"categoryFilter": {
"defaultLevel": "Information",
"categoryLevels": {
"Host": "Error",
"Function": "Error",
"Host.Aggregator": "Information"
}
}
},
"queues": {
"maxPollingInterval": 2000,
"visibilityTimeout" : "00:00:30",
"batchSize": 16,
"maxDequeueCount": 5,
"newBatchThreshold": 8
},
"sendGrid": {
"from": "Contoso Group <admin@contoso.com>"
},
"serviceBus": {
"maxConcurrentCalls": 16,
"prefetchCount": 100,
"autoRenewTimeout": "00:05:00",
"autoComplete": true
},
"singleton": {
"lockPeriod": "00:00:15",
"listenerLockPeriod": "00:01:00",
"listenerLockRecoveryPollingInterval": "00:01:00",
"lockAcquisitionTimeout": "00:01:00",
"lockAcquisitionPollingInterval": "00:00:03"
},
"tracing": {
"consoleLevel": "verbose",
"fileLoggingMode": "debugOnly"
},
"watchDirectories": [ "Shared" ],
}
The following sections of this article explain each top-level property. All are optional unless otherwise indicated.
aggregator
Specifies how many function invocations are aggregated when calculating metrics for Application Insights.
{
"aggregator": {
"batchSize": 1000,
"flushTimeout": "00:00:30"
}
}
Function invocations are aggregated when the first of the two limits is reached.
applicationInsights
Controls the sampling feature in Application Insights.
{
"applicationInsights": {
"sampling": {
"isEnabled": true,
"maxTelemetryItemsPerSecond" : 5
}
}
}
DocumentDB
Configuration settings for the Azure Cosmos DB trigger and bindings.
{
"documentDB": {
"connectionMode": "Gateway",
"protocol": "Https",
"leaseOptions": {
"leasePrefix": "prefix1"
}
}
}
durableTask
Configuration settings for Durable Functions.
NOTE
All major versions of Durable Functions are supported on all versions of the Azure Functions runtime. However, the
schema of the host.json configuration is slightly different depending on the version of the Azure Functions runtime and
the Durable Functions extension version you use. The following examples are for use with Azure Functions 2.0 and 3.0. In
both examples, if you're using Azure Functions 1.0, the available settings are the same, but the "durableTask" section of
the host.json should go in the root of the host.json configuration instead of as a field under "extensions".
{
"extensions": {
"durableTask": {
"hubName": "MyTaskHub",
"storageProvider": {
"connectionStringName": "AzureWebJobsStorage",
"controlQueueBatchSize": 32,
"controlQueueBufferThreshold": 256,
"controlQueueVisibilityTimeout": "00:05:00",
"maxQueuePollingInterval": "00:00:30",
"partitionCount": 4,
"trackingStoreConnectionStringName": "TrackingStorage",
"trackingStoreNamePrefix": "DurableTask",
"useLegacyPartitionManagement": true,
"workItemQueueVisibilityTimeout": "00:05:00",
},
"tracing": {
"traceInputsAndOutputs": false,
"traceReplayEvents": false,
},
"notifications": {
"eventGrid": {
"topicEndpoint": "https://topic_name.westus2-1.eventgrid.azure.net/api/events",
"keySettingName": "EventGridKey",
"publishRetryCount": 3,
"publishRetryInterval": "00:00:30",
"publishEventTypes": [
"Started",
"Pending",
"Failed",
"Terminated"
]
}
},
"maxConcurrentActivityFunctions": 10,
"maxConcurrentOrchestratorFunctions": 10,
"extendedSessionsEnabled": false,
"extendedSessionIdleTimeoutInSeconds": 30,
"useAppLease": true,
"useGracefulShutdown": false,
"maxEntityOperationBatchSize": 50
}
}
}
Task hub names must start with a letter and consist of only letters and numbers. If not specified, the default task
hub name for a function app is DurableFunctionsHub . For more information, see Task hubs.
PROPERTY DEFAULT DESCRIPTION
connectionName (2.7.0 and later), connectionStringName (2.x), azureStorageConnectionStringName (1.x) AzureWebJobsStorage The name of an app setting or setting collection that specifies how to connect to the underlying Azure Storage resources. When a single app setting is provided, it should be an Azure Storage connection string.
Many of these settings are for optimizing performance. For more information, see Performance and scale.
eventHub
Configuration settings for Event Hub triggers and bindings.
functions
A list of functions that the job host runs. An empty array means run all functions. Intended for use only when
running locally. In function apps in Azure, you should instead follow the steps in How to disable functions in
Azure Functions to disable specific functions rather than using this setting.
{
"functions": [ "QueueProcessor", "GitHubWebHook" ]
}
functionTimeout
Indicates the timeout duration for all functions. In a serverless Consumption plan, the valid range is from 1
second to 10 minutes, and the default value is 5 minutes. In an App Service plan, there is no overall limit and the
default is null, which indicates no timeout.
{
"functionTimeout": "00:05:00"
}
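For example, to opt into the longest duration allowed on a Consumption plan (the 10-minute upper bound noted above), you might set:
{
  "functionTimeout": "00:10:00"
}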
healthMonitor
Configuration settings for Host health monitor.
{
"healthMonitor": {
"enabled": true,
"healthCheckInterval": "00:00:10",
"healthCheckWindow": "00:02:00",
"healthCheckThreshold": 6,
"counterThreshold": 0.80
}
}
http
Configuration settings for http triggers and bindings.
{
"http": {
"routePrefix": "api",
"maxOutstandingRequests": 200,
"maxConcurrentRequests": 100,
"dynamicThrottlesEnabled": true
}
}
id
The unique ID for a job host. Can be a lowercase GUID with dashes removed. Required when running locally.
When running in Azure, we recommend that you not set an ID value. An ID is generated automatically in Azure
when id is omitted.
If you share a Storage account across multiple function apps, make sure that each function app has a different
id . You can omit the id property or manually set each function app's id to a different value. The timer
trigger uses a storage lock to ensure that there will be only one timer instance when a function app scales out to
multiple instances. If two function apps share the same id and each uses a timer trigger, only one timer will
run.
{
"id": "9f4ea53c5136457d883d685e57164f08"
}
logger
Controls filtering for logs written by an ILogger object or by context.log.
{
"logger": {
"categoryFilter": {
"defaultLevel": "Information",
"categoryLevels": {
"Host": "Error",
"Function": "Error",
"Host.Aggregator": "Information"
}
}
}
}
queues
Configuration settings for Storage queue triggers and bindings.
{
"queues": {
"maxPollingInterval": 2000,
"visibilityTimeout" : "00:00:30",
"batchSize": 16,
"maxDequeueCount": 5,
"newBatchThreshold": 8
}
}
sendGrid
Configuration settings for the SendGrid output binding.
{
"sendGrid": {
"from": "Contoso Group <admin@contoso.com>"
}
}
serviceBus
Configuration settings for Service Bus triggers and bindings.
{
"serviceBus": {
"maxConcurrentCalls": 16,
"prefetchCount": 100,
"autoRenewTimeout": "00:05:00",
"autoComplete": true
}
}
singleton
Configuration settings for Singleton lock behavior. For more information, see GitHub issue about singleton
support.
{
"singleton": {
"lockPeriod": "00:00:15",
"listenerLockPeriod": "00:01:00",
"listenerLockRecoveryPollingInterval": "00:01:00",
"lockAcquisitionTimeout": "00:01:00",
"lockAcquisitionPollingInterval": "00:00:03"
}
}
tracing
Applies to version 1.x of the Functions runtime only.
Configuration settings for logs that you create by using a TraceWriter object. To learn more, see C# Logging.
{
"tracing": {
"consoleLevel": "verbose",
"fileLoggingMode": "debugOnly"
}
}
watchDirectories
A set of shared code directories that should be monitored for changes. Ensures that when code in these
directories is changed, the changes are picked up by your functions.
{
"watchDirectories": [ "Shared" ]
}
Next steps
Learn how to update the host.json file
Persist settings in environment variables
Monitoring Azure Functions data reference
8/2/2022 • 2 minutes to read
This reference applies to the use of Azure Monitor for monitoring function apps hosted in Azure Functions. See
Monitoring function app with Azure Monitor for details on using Azure Monitor to collect and analyze
monitoring data from your function apps.
See Monitor Azure Functions for details on using Application Insights to collect and analyze log data from
individual functions in your function app.
Metrics
This section lists the platform metrics collected automatically for Azure Functions.
Azure Functions specific metrics
There are two metrics specific to Functions that are of interest: FunctionExecutionCount (the number of function executions) and FunctionExecutionUnits (execution units, measured in MB-milliseconds).
These metrics are used specifically when estimating Consumption plan costs.
General App Service metrics
Aside from Azure Functions specific metrics, the App Service platform implements more metrics, which you can
use to monitor function apps. For the complete list, see metrics available to App Service apps and Monitoring
App Service data reference.
Metric Dimensions
For more information on what metric dimensions are, see Multi-dimensional metrics.
Azure Functions doesn't have any metrics that contain dimensions.
Resource logs
This section lists the types of resource logs you can collect for your function apps.
FunctionAppLogs: Function app logs.
Activity log
The following table lists the operations related to Azure Functions that may be created in the Activity log.
You may also find logged operations that relate to the underlying App Service behaviors. For a more complete
list, see Resource Provider Operations.
For more information on the schema of Activity Log entries, see Activity Log schema.
See Also
See Monitoring Azure Functions for a description of monitoring Azure Functions.
See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Azure Functions pricing
8/2/2022 • 2 minutes to read
Azure Functions has three kinds of pricing plans. Choose the one that best fits your needs:
Consumption plan: Azure provides all of the necessary computational resources. You don't have to worry about resource management, and you pay only for the time that your code runs.
Premium plan: You specify a number of pre-warmed instances that are always online and ready to respond immediately. When your function runs, Azure provides any additional computational resources that are needed. You pay for the pre-warmed instances running continuously and for any additional instances you use as Azure scales your app in and out.
App Service plan: Run your functions just like your web apps. If you use App Service for your other applications, your functions can run on the same plan at no additional cost.
For more information about hosting plans, see Azure Functions hosting plan comparison. Full pricing details are
available on the Functions Pricing page.
Language runtime support policy
8/2/2022 • 2 minutes to read
Retirement process
Azure Functions runtime is built around various components, including operating systems, the Azure Functions
host, and language-specific workers. To maintain full support coverage for function apps, Azure Functions uses
a phased reduction in support as programming language versions reach their end-of-life dates. For most
language versions, the retirement date coincides with the community end-of-life date.
Notification phase
We'll send notification emails to function app users about upcoming language version retirements. The
notifications will be sent at least one year before the date of retirement. When you receive a notification, you should prepare to
upgrade the language version that your function apps use to a supported version.
Retirement phase
Starting on the end-of-life date for a language version, you can no longer create new function apps targeting
that language version.
After the language end-of-life date, function apps that use retired language versions won't be eligible for new
features, security patches, and performance optimizations. However, these function apps will continue to run on
the platform.
IMPORTANT
You're highly encouraged to upgrade the language version of your affected function apps to a supported version. If you're running function apps that use an unsupported language version, you'll be required to upgrade before receiving support for those apps.
Upgrade guides are available for Node.js, PowerShell, and Python.
Next steps
To learn more about how to upgrade the language versions of your function apps, see the following resources:
Currently supported language versions
Azure Functions hosting options
8/2/2022 • 9 minutes to read
When you create a function app in Azure, you must choose a hosting plan for your app. There are three basic
hosting plans available for Azure Functions: Consumption plan, Premium plan, and Dedicated (App Service)
plan. All hosting plans are generally available (GA) on both Linux and Windows virtual machines.
The hosting plan you choose dictates the following behaviors:
How your function app is scaled.
The resources available to each function app instance.
Support for advanced functionality, such as Azure Virtual Network connectivity.
This article provides a detailed comparison between the various hosting plans, along with Kubernetes-based
hosting.
NOTE
If you choose to host your functions in a Kubernetes cluster, consider using an Azure Arc-enabled Kubernetes cluster.
Hosting on an Azure Arc-enabled Kubernetes cluster is currently in preview. To learn more, see App Service, Functions,
and Logic Apps on Azure Arc.
Overview of plans
The following is a summary of the benefits of the three main hosting plans for Functions:
Consumption plan: Scale automatically and only pay for compute resources when your functions are running.
Premium plan: Automatically scales based on demand using pre-warmed workers, which run applications with no delay after being idle; runs on more powerful instances; and connects to virtual networks.
Dedicated plan: Run your functions within an App Service plan at regular App Service plan rates.
The comparison tables in this article also include the following hosting options, which provide the highest
amount of control and isolation in which to run your function apps.
App Service Environment (ASE): An App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale.
Kubernetes: A fully isolated and dedicated environment running on top of the Kubernetes platform, either directly or through Azure Arc.
The remaining tables in this article compare the plans on various features and behaviors. For a cost comparison
between dynamic hosting plans (Consumption and Premium), see the Azure Functions pricing page. For pricing
of the various Dedicated plan options, see the App Service pricing page.
Operating system/runtime
The following table shows operating system and language support for the hosting plans.
Premium plan
  Linux 1,2 code-only: C#, JavaScript, Java, Python, PowerShell Core, TypeScript
  Windows code-only: C#, JavaScript, Java, PowerShell Core, TypeScript
  Linux 1,2,3 Docker container: C#, JavaScript, Java, PowerShell Core, Python, TypeScript
Dedicated plan
  Linux 1,2 code-only: C#, JavaScript, Java, Python, TypeScript
  Windows code-only: C#, JavaScript, Java, PowerShell Core, TypeScript
  Linux 1,2,3 Docker container: C#, JavaScript, Java, PowerShell Core, Python, TypeScript
ASE
  Linux 1,2 code-only: C#, JavaScript, Java, Python, TypeScript
  Windows code-only: C#, JavaScript, Java, PowerShell Core, TypeScript
  Linux 1,2,3 Docker container: C#, JavaScript, Java, PowerShell Core, Python, TypeScript
1 Linux is the only supported operating system for the Python runtime stack.
2 PowerShell support on Linux is currently in preview.
3 Linux is the only supported operating system for Docker containers.
Function app timeout duration (minutes): Consumption plan — default 5, maximum 10.
1 Regardless of the function app timeout setting, 230 seconds is the maximum amount of time that an HTTP
triggered function can take to respond to a request. This is because of the default idle timeout of Azure Load
Balancer. For longer processing times, consider using the Durable Functions async pattern or defer the actual
work and return an immediate response.
2 The default timeout for version 1.x of the Functions runtime is unlimited.
Scale
The following table compares the scaling behaviors of the various hosting plans.
Maximum instances are given on a per-function app (Consumption) or per-plan (Premium/Dedicated) basis,
unless otherwise indicated.
PLAN | SCALE OUT | MAX # INSTANCES
1 During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a
Consumption plan.
2 In some regions, Linux apps on a Premium plan can scale to 40 instances. For more information, see the Premium plan article.
Cold start behavior
Consumption plan: Apps may scale to zero when idle, meaning some requests may have additional latency at startup. The Consumption plan does have some optimizations to help decrease cold start time, including pulling from pre-warmed placeholder functions that already have the function host and language processes running.
Premium plan: Perpetually warm instances to avoid any cold start.
Dedicated plan: When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue.
Service limits
App Service plans — Consumption plan: 100 per region; Premium plan: 100 per resource group; Dedicated plan: 100 per resource group; ASE: not applicable; Kubernetes: not applicable.
Custom domain SSL support — Consumption plan: unbounded SNI SSL connection included; Premium plan, Dedicated plan, and ASE: unbounded SNI SSL and 1 IP SSL connections included; Kubernetes: n/a.
1 By default, the timeout for the Functions 1.x runtime in an App Service plan is unbounded.
2 Requires the App Service plan be set to Always On. Pay at standard rates.
3 These limits are set in the host.
4 The actual number of function apps that you can host depends on the activity of the apps, the size of the
machine instances, and the corresponding resource utilization.
5 The storage limit is the total content size in temporary storage across all apps in the same App Service plan.
6 For function apps in a Premium plan or an App Service plan, you can map a custom domain using either a CNAME or an A record.
7 Guaranteed for up to 60 minutes.
8 Workers are roles that host customer apps. Workers are available in three fixed sizes: one vCPU/3.5 GB RAM, two vCPU/7 GB RAM, and four vCPU/14 GB RAM.
Networking features
FEATURE | CONSUMPTION PLAN | PREMIUM PLAN | DEDICATED PLAN | ASE | KUBERNETES
Billing
Consumption plan: Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used.
Premium plan: Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must always be kept warm. This plan provides the most predictable pricing.
Dedicated plan: You pay the same for function apps in an App Service plan as you would for other App Service resources, like web apps.
App Service Environment (ASE): There's a flat monthly rate for an ASE that pays for the infrastructure and doesn't change with the size of the ASE. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU.
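The Consumption row above can be made concrete with a small worked example, assuming the commonly documented gigabyte-seconds (GB-s) meter (an assumption stated here for illustration; actual rates and rounding rules vary by region and aren't shown in this article):
\[
\text{resource consumption (GB-s)} = \text{memory (GB)} \times \text{execution time (s)}
\]
For example, a single execution that uses 0.5 GB of memory for 4 seconds consumes 0.5 × 4 = 2 GB-s; the total bill combines the GB-s consumed with the total number of executions.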
Next steps
Deployment technologies in Azure Functions
Azure Functions developer guide