ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.4 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

| Property | Value |
|---|---|
| URL | https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html |
| Last Crawled | 2026-03-24 18:08:57 (13 days ago) |
| First Indexed | 2024-02-18 15:18:07 (2 years ago) |
| HTTP Status Code | 200 |
| Meta Title | 快速入门 TensorFlow - Flower Framework |
| Meta Description | Learn how to train a Convolutional Neural Network on CIFAR-10 using federated learning with Flower and TensorFlow in this step-by-step tutorial. |
| Meta Canonical | null |
# Quickstart TensorFlow

In this tutorial we will learn how to train a Convolutional Neural Network on CIFAR-10 using the Flower framework and TensorFlow. First of all, it is recommended to create a virtual environment and run everything within a [virtualenv](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-set-up-a-virtual-env.html).

Let's use `flwr new` to create a complete Flower+TensorFlow project. It will generate all the files needed to run a federation of 10 nodes using [`FedAvg`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedAvg.html). By default, the generated app uses a local simulation profile that `flwr run` submits to a managed local SuperLink, which then executes the run with the Flower Simulation Runtime. The dataset will be partitioned using Flower Datasets' [`IidPartitioner`](https://flower.ai/docs/datasets/ref-api/flwr_datasets.partitioner.IidPartitioner.html#flwr_datasets.partitioner.IidPartitioner).
Now that we have a rough idea of what this example is about, let's get started. First, install Flower in your new environment:

```
# In a new Python environment
$ pip install flwr[simulation]
```
Then, run the command below:

```
$ flwr new @flwrlabs/quickstart-tensorflow
```
After running it you'll notice a new directory named `quickstart-tensorflow` has been created. It should have the following structure:

```
quickstart-tensorflow
├── tfexample
│   ├── __init__.py
│   ├── client_app.py  # Defines your ClientApp
│   ├── server_app.py  # Defines your ServerApp
│   └── task.py        # Defines your model, training and data loading
├── pyproject.toml     # Project metadata like dependencies and configs
└── README.md
```
If you haven't yet installed the project and its dependencies, you can do so by:

```
# From the directory where your pyproject.toml is
$ pip install -e .
```
To run the project, do:

```
# Run with default arguments and stream logs
$ flwr run . --stream
```

Plain `flwr run .` submits the run, prints the run ID, and returns without streaming logs. For the full local workflow, see [Run Flower Locally with a Managed SuperLink](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-flower-locally.html).
With default arguments you will see streamed output like this:

```
Successfully built flwrlabs.quickstart-tensorflow.1-0-0.014c8eb3.fab
Starting local SuperLink on 127.0.0.1:39093...
Successfully started run 1859953118041441032
INFO :      Starting FedAvg strategy:
INFO :      ├── Number of rounds: 3
INFO :      [ROUND 1/3]
INFO :      configure_train: Sampled 5 nodes (out of 10)
INFO :      aggregate_train: Received 5 results and 0 failures
INFO :      └──> Aggregated MetricRecord: {'train_loss': 2.0013, 'train_acc': 0.2624}
INFO :      configure_evaluate: Sampled 10 nodes (out of 10)
INFO :      aggregate_evaluate: Received 10 results and 0 failures
INFO :      └──> Aggregated MetricRecord: {'eval_acc': 0.1216, 'eval_loss': 2.2686}
INFO :      [ROUND 2/3]
INFO :      ...
INFO :      [ROUND 3/3]
INFO :      ...
INFO :      Strategy execution finished in 16.60s
INFO :      Final results:
INFO :      ServerApp-side Evaluate Metrics:
INFO :      {}
Saving final model to disk as final_model.keras...
```
You can also override the parameters defined in the `[tool.flwr.app.config]` section in `pyproject.toml` like this:

```
# Override some arguments
$ flwr run . --run-config "num-server-rounds=5 batch-size=16"
```
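For reference, the overridable section of `pyproject.toml` might look like the sketch below. The key names are taken from the config reads that appear later in this tutorial (`num-server-rounds`, `local-epochs`, `batch-size`, `learning-rate`, `fraction-train`); the values shown are illustrative, not necessarily those generated by `flwr new`:

```toml
# Illustrative defaults only -- check the generated pyproject.toml for the real values
[tool.flwr.app.config]
num-server-rounds = 3
local-epochs = 1
batch-size = 32
learning-rate = 0.001
fraction-train = 0.5
```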
## The Data

This tutorial uses Flower Datasets to easily download and partition the CIFAR-10 dataset. In this example you'll make use of the `IidPartitioner` to generate `num_partitions` partitions. You can choose other partitioners available in Flower Datasets. Each `ClientApp` will call this function to create the NumPy arrays that correspond to its data partition.
```
partitioner = IidPartitioner(num_partitions=num_partitions)
fds = FederatedDataset(
    dataset="uoft-cs/cifar10",
    partitioners={"train": partitioner},
)
partition = fds.load_partition(partition_id, "train")
partition.set_format("numpy")

# Divide data on each node: 80% train, 20% test
partition = partition.train_test_split(test_size=0.2)
x_train, y_train = partition["train"]["img"] / 255.0, partition["train"]["label"]
x_test, y_test = partition["test"]["img"] / 255.0, partition["test"]["label"]
```
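Dividing the raw `uint8` images by 255.0 rescales every pixel into the `[0, 1]` range, which helps training converge. A quick sanity check with NumPy (random data stands in for an actual CIFAR-10 partition):

```python
import numpy as np

# Stand-in for a batch of CIFAR-10 images: 8 RGB images of 32x32 uint8 pixels
imgs = np.random.randint(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)

# Same scaling as the tutorial's data-loading code
x = imgs / 255.0

# Dividing a uint8 array by a Python float yields float64 values in [0, 1]
print(x.dtype, float(x.min()), float(x.max()))
```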
## The Model

Next, we need a model. We define a simple Convolutional Neural Network (CNN), but feel free to replace it with a more sophisticated model if you'd like:
```
def load_model(learning_rate: float = 0.001):
    # Define a simple CNN for CIFAR-10 and set Adam optimizer
    model = keras.Sequential(
        [
            keras.Input(shape=(32, 32, 3)),
            layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
            layers.MaxPooling2D(pool_size=(2, 2)),
            layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
            layers.MaxPooling2D(pool_size=(2, 2)),
            layers.Flatten(),
            layers.Dropout(0.5),
            layers.Dense(10, activation="softmax"),
        ]
    )
    optimizer = keras.optimizers.Adam(learning_rate)
    model.compile(
        optimizer=optimizer,
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```
## The ClientApp

The main changes we have to make to use TensorFlow with Flower have to do with converting the `ArrayRecord` received in the `Message` into NumPy ndarrays for use with the built-in `set_weights()` function. After training, the `get_weights()` function can be used to extract and then pack the updated NumPy ndarrays into a `Message` from the ClientApp. We can make use of built-in methods in the `ArrayRecord` to make these conversions:
```
@app.train()
def train(msg: Message, context: Context):
    # Load the model
    model = load_model(context.run_config["learning-rate"])

    # Extract the ArrayRecord from Message and convert to numpy ndarrays
    model.set_weights(msg.content["arrays"].to_numpy_ndarrays())

    # Train the model
    ...

    # Pack the model weights into an ArrayRecord
    model_record = ArrayRecord(model.get_weights())
```
The rest of the functionality is directly inspired by the centralized case. The `ClientApp` comes with three core methods (`train`, `evaluate`, and `query`) that we can implement for different purposes. For example: `train` to train the received model using the local data; `evaluate` to assess the performance of the received model on a validation set; and `query` to retrieve information about the node executing the `ClientApp`. In this tutorial we will only make use of `train` and `evaluate`.
Let's see how the `train` method can be implemented. It receives as input arguments a `Message` from the `ServerApp`. By default it carries:

- an `ArrayRecord` with the arrays of the model to federate. By default they can be retrieved with key `"arrays"` when accessing the message content.
- a `ConfigRecord` with the configuration sent from the `ServerApp`. By default it can be retrieved with key `"config"` when accessing the message content.
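Conceptually, the message content behaves like a dictionary keyed by record name. A plain-Python sketch of that access pattern (an ordinary dict stands in for the real `RecordDict`, purely for illustration):

```python
# Hypothetical stand-in for msg.content: a plain dict keyed by record name
content = {
    "arrays": ["<model arrays would live here>"],
    "config": {"server-round": 1},
}

# The ClientApp reads each record by its default key
arrays = content["arrays"]   # the model arrays sent by the ServerApp
config = content["config"]   # the per-round configuration
```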
The `train` method also receives the `Context`, giving access to configs for your run and node. The run config hyperparameters are defined in the `pyproject.toml` of your Flower App. The node config can only be set when running Flower with the Deployment Runtime and is not directly configurable during simulations.
```
# Flower ClientApp
app = ClientApp()


@app.train()
def train(msg: Message, context: Context):
    """Train the model on local data."""
    # Reset local TensorFlow state
    keras.backend.clear_session()

    # Load the data
    partition_id = context.node_config["partition-id"]
    num_partitions = context.node_config["num-partitions"]
    x_train, y_train, _, _ = load_data(partition_id, num_partitions)

    # Load the model
    model = load_model(context.run_config["learning-rate"])
    model.set_weights(msg.content["arrays"].to_numpy_ndarrays())

    epochs = context.run_config["local-epochs"]
    batch_size = context.run_config["batch-size"]
    verbose = context.run_config.get("verbose")

    # Train the model
    history = model.fit(
        x_train,
        y_train,
        epochs=epochs,
        batch_size=batch_size,
        verbose=verbose,
    )

    # Get training metrics
    train_loss = history.history["loss"][-1] if "loss" in history.history else None
    train_acc = (
        history.history["accuracy"][-1] if "accuracy" in history.history else None
    )

    # Pack and send the model weights and metrics as a message
    model_record = ArrayRecord(model.get_weights())
    metrics = {"num-examples": len(x_train)}
    if train_loss is not None:
        metrics["train_loss"] = train_loss
    if train_acc is not None:
        metrics["train_acc"] = train_acc
    content = RecordDict({"arrays": model_record, "metrics": MetricRecord(metrics)})
    return Message(content=content, reply_to=msg)
```
The `@app.evaluate()` method would be near identical, with two exceptions: (1) the model is not trained locally; instead, its performance is evaluated on the locally held-out validation set; (2) including the model in the reply `Message` is no longer needed, because it is not modified locally.
## The ServerApp

To construct a `ServerApp` we define its `@app.main()` method. This method receives as input arguments:

- a `Grid` object that will be used to interface with the nodes running the `ClientApp`, to involve them in a round of train/evaluate/query or other tasks.
- a `Context` object that provides access to the run configuration.

In this example we use the `FedAvg` strategy and configure it with a specific value of `fraction_train`, which is read from the run config. You can find the default value defined in the `pyproject.toml`. Then, the execution of the strategy is launched by invoking its `start` method. To it we pass:

- the `Grid` object.
- an `ArrayRecord` carrying a randomly initialized model that will serve as the global model to federate.
- the `num_rounds` parameter specifying how many rounds of `FedAvg` to perform.
```
# Create the ServerApp
app = ServerApp()


@app.main()
def main(grid: Grid, context: Context) -> None:
    """Main entry point for the ServerApp."""
    # Load config
    num_rounds = context.run_config["num-server-rounds"]
    fraction_train = context.run_config["fraction-train"]

    # Load initial model
    model = load_model()
    arrays = ArrayRecord(model.get_weights())

    # Define and start FedAvg strategy
    strategy = FedAvg(
        fraction_train=fraction_train,
    )
    result = strategy.start(
        grid=grid,
        initial_arrays=arrays,
        num_rounds=num_rounds,
    )

    # Save the final model
    ndarrays = result.arrays.to_numpy_ndarrays()
    final_model_name = "final_model.keras"
    print(f"Saving final model to disk as {final_model_name}...")
    model.set_weights(ndarrays)
    model.save(final_model_name)
```
Note the `start` method of the strategy returns a result object. This object contains all the relevant information about the FL process, including the final model weights as an `ArrayRecord`, and federated training and evaluation metrics as `MetricRecord`s. You can easily log the metrics using Python's `pprint` and save the final model weights using TensorFlow's `save()` function.
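To illustrate the logging idea, `pprint` renders nested metric dictionaries legibly. The dictionary below is a made-up stand-in for a run's aggregated per-round metrics, not actual output of this tutorial:

```python
from pprint import pprint

# Hypothetical aggregated training metrics, keyed by round (illustrative values)
metrics_by_round = {
    1: {"train_loss": 2.0013, "train_acc": 0.2624},
    2: {"train_loss": 1.7421, "train_acc": 0.3518},
    3: {"train_loss": 1.5910, "train_acc": 0.4102},
}

# One key per line, sorted -- much easier to scan than the default repr
pprint(metrics_by_round)
```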
Congratulations! You've successfully built and run your first federated learning system.

**Tip:** Check the [Run Simulations](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-simulations.html) documentation to learn more about how to configure and run Flower simulations.
| Markdown | Hide navigation sidebar
Hide table of contents sidebar
[Skip to content](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html#furo-main-content)
Toggle site navigation sidebar
[Flower Framework](https://flower.ai/docs/framework/main/zh_Hans/index.html)
Toggle Light / Dark / Auto color theme
Toggle table of contents sidebar
[  main](https://flower.ai/docs/framework/main/zh_Hans/index.html)
教程
- [什么是联邦学习?](https://flower.ai/docs/framework/main/zh_Hans/tutorial-series-what-is-federated-learning.html)
- [开始使用Flower](https://flower.ai/docs/framework/main/zh_Hans/tutorial-series-get-started-with-flower-pytorch.html)
- [使用联邦学习策略](https://flower.ai/docs/framework/main/zh_Hans/tutorial-series-use-a-federated-learning-strategy-pytorch.html)
- [Customize a Flower Strategy](https://flower.ai/docs/framework/main/zh_Hans/tutorial-series-build-a-strategy-from-scratch-pytorch.html)
- [Communicate custom Messages](https://flower.ai/docs/framework/main/zh_Hans/tutorial-series-customize-the-client-pytorch.html)
- [快速入门教程](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart.html)
Toggle navigation of 快速入门教程
- [PyTorch快速入门](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-pytorch.html)
- [快速入门 TensorFlow](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html)
- [Quickstart MLX](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-mlx.html)
- [🤗 Transformers快速入门](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-huggingface.html)
- [快速入门 JAX](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-jax.html)
- [快速入门Pandas](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-pandas.html)
- [快速入门 fastai](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-fastai.html)
- [快速入门 PyTorch Lightning](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-pytorch-lightning.html)
- [scikit-learn快速入门](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-scikitlearn.html)
- [XGBoost快速入门](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-xgboost.html)
- [快速入门 Android](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-android.html)
- [快速入门 iOS](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-ios.html)
How-to Guides
- [Build](https://flower.ai/docs/framework/main/zh_Hans/build.html)
Toggle navigation of Build
- [安装Flower](https://flower.ai/docs/framework/main/zh_Hans/how-to-install-flower.html)
- [Configure `pyproject.toml`](https://flower.ai/docs/framework/main/zh_Hans/how-to-configure-pyproject-toml.html)
- [Configure a `ClientApp`](https://flower.ai/docs/framework/main/zh_Hans/how-to-configure-clients.html)
- [Design stateful ClientApps](https://flower.ai/docs/framework/main/zh_Hans/how-to-design-stateful-clients.html)
- [使用策略](https://flower.ai/docs/framework/main/zh_Hans/how-to-use-strategies.html)
- [实施策略](https://flower.ai/docs/framework/main/zh_Hans/how-to-implement-strategies.html)
- [整合评估结果](https://flower.ai/docs/framework/main/zh_Hans/how-to-aggregate-evaluation-results.html)
- [Save and load model checkpoints](https://flower.ai/docs/framework/main/zh_Hans/how-to-save-and-load-model-checkpoints.html)
- [Use Built-in Mods](https://flower.ai/docs/framework/main/zh_Hans/how-to-use-built-in-mods.html)
- [Use Differential Privacy](https://flower.ai/docs/framework/main/zh_Hans/how-to-use-differential-privacy.html)
- [Implement FedBN](https://flower.ai/docs/framework/main/zh_Hans/how-to-implement-fedbn.html)
- [Use CLI JSON output](https://flower.ai/docs/framework/main/zh_Hans/how-to-use-cli-json-output.html)
- [OpenFL Migration Guide](https://flower.ai/docs/framework/main/zh_Hans/how-to-migrate-from-openfl.html)
- [升级至 Flower 1.0](https://flower.ai/docs/framework/main/zh_Hans/how-to-upgrade-to-flower-1.0.html)
- [Upgrade to Flower 1.13](https://flower.ai/docs/framework/main/zh_Hans/how-to-upgrade-to-flower-1.13.html)
- [Upgrade to Message API](https://flower.ai/docs/framework/main/zh_Hans/how-to-upgrade-to-message-api.html)
- [Simulate](https://flower.ai/docs/framework/main/zh_Hans/simulate.html)
Toggle navigation of Simulate
- [Run Flower Locally with a Managed SuperLink](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-flower-locally.html)
- [运行模拟](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-simulations.html)
- [Deploy](https://flower.ai/docs/framework/main/zh_Hans/deploy.html)
Toggle navigation of Deploy
- [Run Flower with the Deployment Runtime](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-flower-with-deployment-engine.html)
- [Enable TLS connections](https://flower.ai/docs/framework/main/zh_Hans/how-to-enable-tls-connections.html)
- [Authenticate SuperNodes](https://flower.ai/docs/framework/main/zh_Hans/how-to-authenticate-supernodes.html)
- [Configure logging](https://flower.ai/docs/framework/main/zh_Hans/how-to-configure-logging.html)
- [Run Flower on GCP](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-flower-on-gcp.html)
- [Run Flower on Azure](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-flower-on-azure.html)
- [Run Flower on Red Hat OpenShift](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-flower-on-red-hat-openshift.html)
- [Run Flower on Multiple OpenShift Clusters](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-flower-on-multiple-openshift-clusters.html)
- [Authenticate Accounts via OpenID Connect](https://flower.ai/docs/framework/main/zh_Hans/how-to-authenticate-accounts.html)
- [Configure Audit Logging](https://flower.ai/docs/framework/main/zh_Hans/how-to-configure-audit-logging.html)
- [Manage Flower Federations](https://flower.ai/docs/framework/main/zh_Hans/how-to-manage-flower-federations.html)
- [Run Flower using Docker](https://flower.ai/docs/framework/main/zh_Hans/docker/index.html)
Toggle navigation of Run Flower using Docker
- [Quickstart with Docker](https://flower.ai/docs/framework/main/zh_Hans/docker/tutorial-quickstart-docker.html)
- [Enable TLS for Secure Connections](https://flower.ai/docs/framework/main/zh_Hans/docker/enable-tls.html)
- [Persist the State of the SuperLink](https://flower.ai/docs/framework/main/zh_Hans/docker/persist-superlink-state.html)
- [Set Environment Variables](https://flower.ai/docs/framework/main/zh_Hans/docker/set-environment-variables.html)
- [Run with Root User Privileges](https://flower.ai/docs/framework/main/zh_Hans/docker/run-as-root-user.html)
- [Run ServerApp or ClientApp as a Subprocess](https://flower.ai/docs/framework/main/zh_Hans/docker/run-as-subprocess.html)
- [Pin a Docker Image to a Specific Version](https://flower.ai/docs/framework/main/zh_Hans/docker/pin-version.html)
- [Use a Different Flower Version](https://flower.ai/docs/framework/main/zh_Hans/docker/use-a-different-version.html)
- [Quickstart with Docker Compose](https://flower.ai/docs/framework/main/zh_Hans/docker/tutorial-quickstart-docker-compose.html)
- [Run Flower Quickstart Examples with Docker Compose](https://flower.ai/docs/framework/main/zh_Hans/docker/run-quickstart-examples-docker-compose.html)
- [Deploy Flower on Multiple Machines with Docker Compose](https://flower.ai/docs/framework/main/zh_Hans/docker/tutorial-deploy-on-multiple-machines.html)
- [Run Flower using Helm](https://flower.ai/docs/framework/main/zh_Hans/helm/index.html)
Toggle navigation of Run Flower using Helm
- [Deploy SuperLink](https://flower.ai/docs/framework/main/zh_Hans/helm/how-to-deploy-superlink-using-helm.html)
- [Deploy SuperNode](https://flower.ai/docs/framework/main/zh_Hans/helm/how-to-deploy-supernode-using-helm.html)
说明
- [联邦学习评估](https://flower.ai/docs/framework/main/zh_Hans/explanation-federated-evaluation.html)
- [差分隐私](https://flower.ai/docs/framework/main/zh_Hans/explanation-differential-privacy.html)
- [安全聚合协议](https://flower.ai/docs/framework/main/zh_Hans/explanation-ref-secure-aggregation-protocols.html)
- [Flower的架构](https://flower.ai/docs/framework/main/zh_Hans/explanation-flower-architecture.html)
- [Flower Strategy Abstraction](https://flower.ai/docs/framework/main/zh_Hans/explanation-flower-strategy-abstraction.html)
参考资料
- [Reference](https://flower.ai/docs/framework/main/zh_Hans/reference.html)
Toggle navigation of Reference
- [flwr](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.html)
Toggle navigation of flwr
- [app](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.html)
Toggle navigation of app
- [Array](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.Array.html)
- [ArrayRecord](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.ArrayRecord.html)
- [ConfigRecord](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.ConfigRecord.html)
- [Context](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.Context.html)
- [Error](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.Error.html)
- [Message](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.Message.html)
- [MessageType](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.MessageType.html)
- [描述数据](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.Metadata.html)
- [MetricRecord](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.MetricRecord.html)
- [RecordDict](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.RecordDict.html)
- [UserConfig](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.UserConfig.html)
- [clientapp](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.html)
Toggle navigation of clientapp
- [ClientApp](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.ClientApp.html)
- [mod](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.mod.html)
Toggle navigation of mod
- [adaptiveclipping\_mod](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.mod.adaptiveclipping_mod.html)
- [arrays\_size\_mod](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.mod.arrays_size_mod.html)
- [fixedclipping\_mod](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.mod.fixedclipping_mod.html)
- [message\_size\_mod](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.mod.message_size_mod.html)
- [LocalDpMod](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.mod.LocalDpMod.html)
- [serverapp](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.html)
Toggle navigation of serverapp
- [Grid](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.Grid.html)
- [ServerApp](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.ServerApp.html)
- [strategy](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.html)
Toggle navigation of strategy
- [Bulyan](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.Bulyan.html)
- [DifferentialPrivacyClientSideAdaptiveClipping](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.DifferentialPrivacyClientSideAdaptiveClipping.html)
- [DifferentialPrivacyClientSideFixedClipping](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.DifferentialPrivacyClientSideFixedClipping.html)
- [DifferentialPrivacyServerSideAdaptiveClipping](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.DifferentialPrivacyServerSideAdaptiveClipping.html)
- [DifferentialPrivacyServerSideFixedClipping](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.DifferentialPrivacyServerSideFixedClipping.html)
- [FedAdagrad](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedAdagrad.html)
- [FedAdam](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedAdam.html)
- [FedAvg](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedAvg.html)
- [FedAvgM](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedAvgM.html)
- [FedMedian](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedMedian.html)
- [FedProx](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedProx.html)
- [FedTrimmedAvg](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedTrimmedAvg.html)
- [FedXgbBagging](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedXgbBagging.html)
- [FedXgbCyclic](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedXgbCyclic.html)
- [FedYogi](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedYogi.html)
- [Krum](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.Krum.html)
- [MultiKrum](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.MultiKrum.html)
- [QFedAvg](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.QFedAvg.html)
- [Result](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.Result.html)
- [Strategy](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.Strategy.html)
- [Flower CLI 参考](https://flower.ai/docs/framework/main/zh_Hans/ref-api-cli.html)
- [Flower Configuration](https://flower.ai/docs/framework/main/zh_Hans/ref-flower-configuration.html)
- [项目实例](https://flower.ai/docs/framework/main/zh_Hans/ref-example-projects.html)
- [遥测功能](https://flower.ai/docs/framework/main/zh_Hans/ref-telemetry.html)
- [更新日志](https://flower.ai/docs/framework/main/zh_Hans/ref-changelog.html)
- [Flower Runtime Comparison](https://flower.ai/docs/framework/main/zh_Hans/ref-flower-runtime-comparison.html)
- [Flower Network Communication](https://flower.ai/docs/framework/main/zh_Hans/ref-flower-network-communication.html)
- [Exit Codes](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes-dir.html)
Toggle navigation of Exit Codes
- [\[0\] SUCCESS](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/0.html)
- [\[1\] GRACEFUL\_EXIT\_SIGINT](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/1.html)
- [\[100\] SUPERLINK\_THREAD\_CRASH](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/100.html)
- [\[101\] SUPERLINK\_LICENSE\_INVALID](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/101.html)
- [\[102\] SUPERLINK\_LICENSE\_MISSING](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/102.html)
- [\[103\] SUPERLINK\_LICENSE\_URL\_INVALID](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/103.html)
- [\[104\] SUPERLINK\_INVALID\_ARGS](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/104.html)
- [\[105\] SUPERLINK\_DATABASE\_SCHEMA\_MISMATCH](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/105.html)
- [\[2\] GRACEFUL\_EXIT\_SIGQUIT](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/2.html)
- [\[200\] SERVERAPP\_STRATEGY\_PRECONDITION\_UNMET](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/200.html)
- [\[201\] SERVERAPP\_EXCEPTION](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/201.html)
- [\[202\] SERVERAPP\_STRATEGY\_AGGREGATION\_ERROR](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/202.html)
- [\[203\] SERVERAPP\_RUN\_START\_REJECTED](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/203.html)
- [\[3\] GRACEFUL\_EXIT\_SIGTERM](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/3.html)
- [\[300\] SUPERNODE\_REST\_ADDRESS\_INVALID](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/300.html)
- [\[302\] SUPERNODE\_NODE\_AUTH\_KEY\_INVALID](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/302.html)
- [\[303\] SUPERNODE\_STARTED\_WITHOUT\_TLS\_BUT\_NODE\_AUTH\_ENABLED](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/303.html)
- [\[304\] SUPERNODE\_INVALID\_TRUSTED\_ENTITIES](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/304.html)
- [\[400\] SUPEREXEC\_INVALID\_PLUGIN\_CONFIG](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/400.html)
- [\[500\] FLWRCLI\_NODE\_AUTH\_PUBLIC\_KEY\_INVALID](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/500.html)
- [\[600\] COMMON\_ADDRESS\_INVALID](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/600.html)
- [\[601\] COMMON\_MISSING\_EXTRA\_REST](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/601.html)
- [\[602\] COMMON\_TLS\_NOT\_SUPPORTED](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/602.html)
- [\[700\] SIMULATION\_EXCEPTION](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/700.html)
- [\[701\] SIMULATION\_MISSING\_EXTRA](https://flower.ai/docs/framework/main/zh_Hans/ref-exit-codes/701.html)
- [常见问题](https://flower.ai/docs/framework/main/zh_Hans/ref-faq.html)
贡献者文档
- [Contribute](https://flower.ai/docs/framework/main/zh_Hans/contribute.html)
Toggle navigation of Contribute
- [在 GitHub 上投稿](https://flower.ai/docs/framework/main/zh_Hans/contributor-tutorial-contribute-on-github.html)
- [成为贡献者](https://flower.ai/docs/framework/main/zh_Hans/contributor-tutorial-get-started-as-a-contributor.html)
- [安装开发版本](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-install-development-versions.html)
- [建立虚拟环境](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-set-up-a-virtual-env.html)
- [使用 VSCode Dev Containers 进行开发](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-develop-in-vscode-dev-containers.html)
- [编写文件](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-write-documentation.html)
- [发布 Flower](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-release-flower.html)
- [贡献译文](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-contribute-translations.html)
- [How to Build Docker Flower Images Locally](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-build-docker-images.html)
- [Migrate Flower Database Schema](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-migrate-database.html)
- [Public and private APIs](https://flower.ai/docs/framework/main/zh_Hans/contributor-explanation-public-and-private-apis.html)
- [首次代码贡献](https://flower.ai/docs/framework/main/zh_Hans/contributor-ref-good-first-contributions.html)
Versions
- [main](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html)
[🇬🇧](https://flower.ai/docs/framework/main/en/tutorial-quickstart-tensorflow.html) [🇫🇷](https://flower.ai/docs/framework/main/fr/tutorial-quickstart-tensorflow.html) [🇨🇳](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html) [🇰🇷](https://flower.ai/docs/framework/main/ko/tutorial-quickstart-tensorflow.html)
[Back to top](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html)
[View this page](https://flower.ai/docs/framework/main/zh_Hans/_sources/tutorial-quickstart-tensorflow.rst.txt "View this page")
Toggle Light / Dark / Auto color theme
Toggle table of contents sidebar
# 快速入门 TensorFlow[¶](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html#id1 "Link to this heading")
In this tutorial we will learn how to train a Convolutional Neural Network on CIFAR-10 using the Flower framework and TensorFlow. First of all, it is recommended to create a virtual environment and run everything within a [virtualenv](https://flower.ai/docs/framework/main/zh_Hans/contributor-how-to-set-up-a-virtual-env.html).
Let's use `flwr new` to create a complete Flower+TensorFlow project. It will generate all the files needed to run a federation of 10 nodes using [`FedAvg`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedAvg.html). By default, the generated app uses a local simulation profile that `flwr run` submits to a managed local SuperLink, which then executes the run with the Flower Simulation Runtime. The dataset will be partitioned using Flower Dataset's [IidPartitioner](https://flower.ai/docs/datasets/ref-api/flwr_datasets.partitioner.IidPartitioner.html#flwr_datasets.partitioner.IidPartitioner).
Now that we have a rough idea of what this example is about, let's get started. First, install Flower in your new environment:
```
# In a new Python environment
$ pip install flwr[simulation]
```
Then, run the command below:
```
$ flwr new @flwrlabs/quickstart-tensorflow
```
After running it you'll notice a new directory named `quickstart-tensorflow` has been created. It should have the following structure:
```
quickstart-tensorflow
├── tfexample
│ ├── __init__.py
│ ├── client_app.py # Defines your ClientApp
│ ├── server_app.py # Defines your ServerApp
│ └── task.py # Defines your model, training and data loading
├── pyproject.toml # Project metadata like dependencies and configs
└── README.md
```
If you haven't yet installed the project and its dependencies, you can do so by running:
```
# From the directory where your pyproject.toml is
$ pip install -e .
```
To run the project, do:
```
# Run with default arguments and stream logs
$ flwr run . --stream
```
Plain `flwr run .` submits the run, prints the run ID, and returns without streaming logs. For the full local workflow, see [Run Flower Locally with a Managed SuperLink](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-flower-locally.html).
With default arguments you will see streamed output like this:
```
Successfully built flwrlabs.quickstart-tensorflow.1-0-0.014c8eb3.fab
Starting local SuperLink on 127.0.0.1:39093...
Successfully started run 1859953118041441032
INFO : Starting FedAvg strategy:
INFO : ├── Number of rounds: 3
INFO : [ROUND 1/3]
INFO : configure_train: Sampled 5 nodes (out of 10)
INFO : aggregate_train: Received 5 results and 0 failures
INFO : └──> Aggregated MetricRecord: {'train_loss': 2.0013, 'train_acc': 0.2624}
INFO : configure_evaluate: Sampled 10 nodes (out of 10)
INFO : aggregate_evaluate: Received 10 results and 0 failures
INFO : └──> Aggregated MetricRecord: {'eval_acc': 0.1216, 'eval_loss': 2.2686}
INFO : [ROUND 2/3]
INFO : ...
INFO : [ROUND 3/3]
INFO : ...
INFO : Strategy execution finished in 16.60s
INFO : Final results:
INFO : ServerApp-side Evaluate Metrics:
INFO : {}
Saving final model to disk as final_model.keras...
```
You can also override the parameters defined in the `[tool.flwr.app.config]` section in `pyproject.toml` like this:
```
# Override some arguments
$ flwr run . --run-config "num-server-rounds=5 batch-size=16"
```
## The Data[¶](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html#the-data "Link to this heading")
This tutorial uses [Flower Datasets](https://flower.ai/docs/datasets/) to easily download and partition the CIFAR-10 dataset. In this example you'll use the [IidPartitioner](https://flower.ai/docs/datasets/ref-api/flwr_datasets.partitioner.IidPartitioner.html#flwr_datasets.partitioner.IidPartitioner) to generate `num_partitions` partitions. You can choose [other partitioners](https://flower.ai/docs/datasets/ref-api/flwr_datasets.partitioner.html) available in Flower Datasets. Each `ClientApp` will call this function to create the NumPy arrays that correspond to its data partition.
```
partitioner = IidPartitioner(num_partitions=num_partitions)
fds = FederatedDataset(
dataset="uoft-cs/cifar10",
partitioners={"train": partitioner},
)
partition = fds.load_partition(partition_id, "train")
partition.set_format("numpy")
# Divide data on each node: 80% train, 20% test
partition = partition.train_test_split(test_size=0.2)
x_train, y_train = partition["train"]["img"] / 255.0, partition["train"]["label"]
x_test, y_test = partition["test"]["img"] / 255.0, partition["test"]["label"]
```
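Under the hood, an IID partition amounts to shuffling the example indices once and slicing them into equal chunks. A rough NumPy illustration of the idea (not Flower Datasets' actual implementation):

```python
import numpy as np

def iid_partition(num_examples: int, num_partitions: int, seed: int = 0):
    """Toy sketch: shuffle all indices once, then slice into equal chunks."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_examples)
    return np.array_split(indices, num_partitions)

# CIFAR-10 has 50,000 training examples; with 10 nodes each partition
# holds 5,000 randomly drawn examples.
parts = iid_partition(num_examples=50_000, num_partitions=10)
print(len(parts), len(parts[0]))
```

Because every partition is a uniform random sample, each node sees roughly the same class distribution, which is what makes the IID setting the easiest baseline for federated learning.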
## The Model[¶](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html#the-model "Link to this heading")
Next, we need a model. We define a simple Convolutional Neural Network (CNN); feel free to replace it with a more sophisticated model if you'd like:
```
def load_model(learning_rate: float = 0.001):
# Define a simple CNN for CIFAR-10 and set Adam optimizer
model = keras.Sequential(
[
keras.Input(shape=(32, 32, 3)),
layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(10, activation="softmax"),
]
)
optimizer = keras.optimizers.Adam(learning_rate)
model.compile(
optimizer=optimizer,
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
```
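The model is compiled with `sparse_categorical_crossentropy`. As a sanity check, this loss can be computed by hand with NumPy: an untrained 10-class model predicts roughly uniformly, so the loss starts near ln(10) ≈ 2.30, which matches the first-round `eval_loss` in the logs above. The helper names below are ours, not Keras's:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sparse_categorical_crossentropy(y_true, probs):
    # Mean negative log-probability assigned to the true class
    return -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))

# Zero logits -> uniform predictions over the 10 CIFAR-10 classes
probs = softmax(np.zeros((4, 10)))
loss = sparse_categorical_crossentropy(np.array([3, 1, 7, 0]), probs)
print(round(loss, 3))  # → 2.303
```

The "sparse" variant takes integer class labels directly, which is why the CIFAR-10 labels need no one-hot encoding in this example.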
## The ClientApp[¶](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html#the-clientapp "Link to this heading")
The main changes required to use TensorFlow with Flower concern converting the [`ArrayRecord`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.ArrayRecord.html) received in the [`Message`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.Message.html) into NumPy ndarrays that can be passed to the model's built-in `set_weights()` method. After training, `get_weights()` extracts the updated ndarrays, which are then packed into the reply `Message` from the ClientApp. The `ArrayRecord` provides built-in methods for these conversions:
```
@app.train()
def train(msg: Message, context: Context):
# Load the model
model = load_model(context.run_config["learning-rate"])
# Extract the ArrayRecord from Message and convert to numpy ndarrays
model.set_weights(msg.content["arrays"].to_numpy_ndarrays())
# Train the model
...
# Pack the model weights into an ArrayRecord
model_record = ArrayRecord(model.get_weights())
```
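Conceptually, an `ArrayRecord` carries an ordered list of NumPy arrays, matching the list returned by Keras's `get_weights()`. The toy stand-in below (illustration only, not Flower's API) shows the round-trip that `set_weights()`/`get_weights()` rely on:

```python
import numpy as np

class ToyArrayRecord:
    """Minimal stand-in for ArrayRecord (not Flower's implementation)."""

    def __init__(self, ndarrays):
        # Preserve the order of the layer weight arrays
        self._ndarrays = [np.asarray(a) for a in ndarrays]

    def to_numpy_ndarrays(self):
        return [a.copy() for a in self._ndarrays]

weights = [np.ones((3, 3)), np.zeros(3)]   # e.g. what get_weights() returns
record = ToyArrayRecord(weights)           # pack for the outgoing Message
restored = record.to_numpy_ndarrays()      # unpack on the receiving side
assert all(np.array_equal(a, b) for a, b in zip(weights, restored))
```

Order matters: `set_weights()` assigns arrays positionally, so the record must preserve the exact sequence produced by `get_weights()`.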
The rest of the functionality is directly inspired by the centralized case. The [`ClientApp`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.clientapp.ClientApp.html) comes with three core methods (`train`, `evaluate`, and `query`) that we can implement for different purposes. For example: `train` to train the received model using the local data; `evaluate` to assess the performance of the received model on a validation set; and `query` to retrieve information about the node executing the `ClientApp`. In this tutorial we will only make use of `train` and `evaluate`.
Let's see how the `train` method can be implemented. It receives as input arguments a [`Message`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.app.Message.html) from the `ServerApp`. By default it carries:
- an `ArrayRecord` with the arrays of the model to federate. By default they can be retrieved with key `"arrays"` when accessing the message content.
- a `ConfigRecord` with the configuration sent from the `ServerApp`. By default it can be retrieved with key `"config"` when accessing the message content.
The `train` method also receives the `Context`, giving access to configs for your run and node. The run config hyperparameters are defined in the `pyproject.toml` of your Flower App. The node config can only be set when running Flower with the Deployment Runtime and is not directly configurable during simulations.
```
# Flower ClientApp
app = ClientApp()
@app.train()
def train(msg: Message, context: Context):
"""Train the model on local data."""
# Reset local Tensorflow state
keras.backend.clear_session()
# Load the data
partition_id = context.node_config["partition-id"]
num_partitions = context.node_config["num-partitions"]
x_train, y_train, _, _ = load_data(partition_id, num_partitions)
# Load the model
model = load_model(context.run_config["learning-rate"])
model.set_weights(msg.content["arrays"].to_numpy_ndarrays())
epochs = context.run_config["local-epochs"]
batch_size = context.run_config["batch-size"]
verbose = context.run_config.get("verbose")
# Train the model
history = model.fit(
x_train,
y_train,
epochs=epochs,
batch_size=batch_size,
verbose=verbose,
)
# Get training metrics
train_loss = history.history["loss"][-1] if "loss" in history.history else None
train_acc = (
history.history["accuracy"][-1] if "accuracy" in history.history else None
)
# Pack and send the model weights and metrics as a message
model_record = ArrayRecord(model.get_weights())
metrics = {"num-examples": len(x_train)}
if train_loss is not None:
metrics["train_loss"] = train_loss
if train_acc is not None:
metrics["train_acc"] = train_acc
content = RecordDict({"arrays": model_record, "metrics": MetricRecord(metrics)})
return Message(content=content, reply_to=msg)
```
The `@app.evaluate()` method is nearly identical, with two exceptions: (1) the model is not trained locally; instead, its performance is evaluated on the locally held-out validation set; (2) the model does not need to be included in the reply `Message`, since it is not modified locally.
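Including `num-examples` in the reply metrics lets the strategy compute example-weighted averages across nodes. A minimal sketch of FedAvg-style weighted aggregation (simplified; the actual logic lives inside the strategy):

```python
# Each tuple: (num-examples, train_loss) reported by one node
replies = [(4000, 2.10), (4000, 1.95), (2000, 2.40)]

# Weight each node's loss by its share of the total examples
total = sum(n for n, _ in replies)
weighted_loss = sum(n * loss for n, loss in replies) / total
print(round(weighted_loss, 3))  # → 2.1
```

Without the example counts, every node would contribute equally regardless of how much data it trained on, skewing the aggregate toward nodes with small partitions.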
## The ServerApp[¶](https://flower.ai/docs/framework/main/zh_Hans/tutorial-quickstart-tensorflow.html#the-serverapp "Link to this heading")
To construct a [`ServerApp`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.ServerApp.html) we define its `@app.main()` method. This method receives as input arguments:
- a `Grid` object used to interface with the nodes running the `ClientApp`, involving them in a round of training, evaluation, querying, or other communication.
- a `Context` object that provides access to the run configuration.
In this example we use the [`FedAvg`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.strategy.FedAvg.html) strategy and configure it with a specific value of `fraction_train`, which is read from the run config. You can find its default value defined in the `pyproject.toml`. The execution of the strategy is then launched by invoking its [`start`](https://flower.ai/docs/framework/main/zh_Hans/ref-api/flwr.serverapp.Strategy.html#flwr.serverapp.Strategy.start) method, to which we pass:
- the `Grid` object.
- an `ArrayRecord` carrying a randomly initialized model that will serve as the global model to federate.
- the `num_rounds` parameter specifying how many rounds of `FedAvg` to perform.
```
# Create the ServerApp
app = ServerApp()
@app.main()
def main(grid: Grid, context: Context) -> None:
"""Main entry point for the ServerApp."""
# Load config
num_rounds = context.run_config["num-server-rounds"]
fraction_train = context.run_config["fraction-train"]
# Load initial model
model = load_model()
arrays = ArrayRecord(model.get_weights())
# Define and start FedAvg strategy
strategy = FedAvg(
fraction_train=fraction_train,
)
result = strategy.start(
grid=grid,
initial_arrays=arrays,
num_rounds=num_rounds,
)
# Save the final model
ndarrays = result.arrays.to_numpy_ndarrays()
final_model_name = "final_model.keras"
print(f"Saving final model to disk as {final_model_name}...")
model.set_weights(ndarrays)
model.save(final_model_name)
```
Note that the strategy's `start` method returns a result object. This object contains all the relevant information about the FL process, including the final model weights as an `ArrayRecord` and the federated training and evaluation metrics as `MetricRecord`s. You can easily log the metrics using Python's [pprint](https://docs.python.org/3/library/pprint.html) and save the final model weights using the model's `save()` method.
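For example, aggregated metrics could be logged like this (the dictionary below is illustrative, not actual run output):

```python
from pprint import pprint

# Hypothetical shape: aggregated metrics keyed by round number
metrics = {
    1: {"train_loss": 2.0013, "train_acc": 0.2624},
    2: {"train_loss": 1.7421, "train_acc": 0.3518},
}
# sort_dicts=False keeps the insertion (round) order in the printout
pprint(metrics, sort_dicts=False)
```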
Congratulations! You've successfully built and run your first federated learning system.
Tip
Check the [How to Run Simulations](https://flower.ai/docs/framework/main/zh_Hans/how-to-run-simulations.html) documentation to learn more about how to configure and run Flower simulations.
Note
Check the source code of the extended version of this tutorial in [`examples/quickstart-tensorflow`](https://github.com/flwrlabs/flower/blob/main/examples/quickstart-tensorflow) in the Flower GitHub repository.
Copyright © 2026 Flower Labs GmbH