Keras Flowsheet Optimization¶

Autothermal Reformer Flowsheet Optimization with OMLT (TensorFlow Keras) Surrogate Object¶
1. Introduction¶
This example demonstrates optimization of an autothermal reformer using the OMLT package with TensorFlow Keras neural network surrogates. In this notebook, sampled simulation data will be used to train and validate a surrogate model. IDAES surrogate plotting tools will be used to visualize the surrogates on training and validation data. Once validated, integration of the surrogate into an IDAES flowsheet will be demonstrated.
2. Problem Statement¶
Within the context of a larger NGFC system, the autothermal reformer generates syngas from air, steam and natural gas for use in a solid-oxide fuel cell (SOFC).
2.1. Main Inputs:¶
- Bypass fraction (dimensionless) - split fraction of natural gas to bypass AR unit and feed directly to the power island
- NG-Steam Ratio (dimensionless) - ratio of natural gas to steam fed into the AR unit operation
2.2. Main Outputs:¶
- Steam flowrate (kg/s) - inlet steam fed to AR unit
- Reformer duty (kW) - required energy input to AR unit
- Composition (dimensionless) - outlet mole fractions of components (Ar, C2H6, C3H8, C4H10, CH4, CO, CO2, H2, H2O, N2, O2)
from IPython.display import Image
Image("AR_PFD.png")
3. Training and Validating Surrogates¶
First, let's import the required Python, Pyomo and IDAES modules:
# Import statements
import os
import numpy as np
import pandas as pd
import random as rn
import tensorflow as tf
# Import Pyomo libraries
from pyomo.environ import (ConcreteModel, SolverFactory, value, Var,
                           Constraint, Set, Objective, maximize)
from pyomo.common.timing import TicTocTimer
# Import IDAES libraries
from idaes.core.surrogate.sampling.data_utils import split_training_validation
from idaes.core.surrogate.sampling.scaling import OffsetScaler
from idaes.core.surrogate.keras_surrogate import KerasSurrogate, save_keras_json_hd5, load_keras_json_hd5
from idaes.core.surrogate.plotting.sm_plotter import surrogate_scatter2D, surrogate_parity, surrogate_residual
from idaes.core.surrogate.surrogate_block import SurrogateBlock
from idaes.core import FlowsheetBlock
# fix environment variables to ensure consistent neural network training
os.environ['PYTHONHASHSEED'] = '0'
os.environ['CUDA_VISIBLE_DEVICES'] = ''
np.random.seed(46)
rn.seed(1342)
tf.random.set_seed(62)
3.1 Importing Training and Validation Datasets¶
In this section, we read the dataset from the CSV file located in this directory. 2800 data points were simulated from a rigorous IDAES NGFC flowsheet using a grid sampling method. For simplicity and to reduce training runtime, this example randomly selects 100 data points for training and validation. The data is split 80/20 into training and validation sets using the IDAES split_training_validation() method.
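Conceptually, a seeded 80/20 split just shuffles the rows reproducibly and partitions them. A minimal pandas sketch of the idea (an illustrative helper, not the IDAES implementation):

```python
import pandas as pd

def split_80_20(df, seed):
    # Shuffle rows reproducibly, then take the first 80% for training
    shuffled = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    n_train = int(0.8 * len(df))
    return shuffled.iloc[:n_train], shuffled.iloc[n_train:]

df = pd.DataFrame({"x": range(10), "y": range(10)})
train, test = split_80_20(df, seed=42)
print(len(train), len(test))  # 8 2
```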
# Import Auto-reformer training data
np.set_printoptions(precision=6, suppress=True)
csv_data = pd.read_csv(r'reformer-data.csv') # 2800 data points
data = csv_data.sample(n = 100) # randomly sample points for training/validation
input_data = data.iloc[:, :2]
output_data = data.iloc[:, 2:]
# Define labels, and split training and validation data
input_labels = input_data.columns
output_labels = output_data.columns
n_data = data[input_labels[0]].size
data_training, data_validation = split_training_validation(data, 0.8, seed=n_data) # seed=100
3.2 Training Surrogates with TensorFlow Keras¶
TensorFlow Keras provides an interface to pass regression settings, build neural networks and train surrogate models. Keras supports two API formats: Sequential and Functional. While the Functional API offers more versatility, including multiple input and output layers in a single neural network, the Sequential API is more stable and user-friendly. Further, the Sequential API integrates cleanly with existing IDAES surrogate tools and will be used in this example.
In the code below, we build the neural network structure based on our training data structure and desired regression settings. Offline, neural network models were trained for the list of settings below, and the italicized options were determined to have the minimum mean squared error for the dataset:
- Activation function: relu, sigmoid, *tanh*
- Optimizer: *Adam*, RMSprop, SGD
- Number of hidden layers: 1, *2*, 4
- Number of neurons per layer: 10, 20, *40*
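The offline screening described above amounts to a grid search over these settings. A minimal sketch using the standard library (here, `mse_for` is a hypothetical placeholder that stands in for training a network with the given settings and returning its validation MSE):

```python
from itertools import product

activations = ["relu", "sigmoid", "tanh"]
optimizers = ["Adam", "RMSprop", "SGD"]
hidden_layers = [1, 2, 4]
nodes_per_layer = [10, 20, 40]

def mse_for(act, opt, layers, nodes):
    # Placeholder scoring: in practice, train a network with these
    # settings and return its validation mean squared error.
    winner = ("tanh", "Adam", 2, 40)
    return 0.0 if (act, opt, layers, nodes) == winner else 1.0

# Evaluate every combination and keep the one with minimum MSE
best = min(product(activations, optimizers, hidden_layers, nodes_per_layer),
           key=lambda s: mse_for(*s))
print(best)  # ('tanh', 'Adam', 2, 40)
```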
Typically, Sequential Keras models are built vertically: the dataset is scaled and normalized, and the network is defined with an input layer, hidden layers and an output layer using the chosen activation functions and network/layer sizes. The model is then compiled with the chosen optimizer and trained for a desired number of epochs; the optimizer updates the model weights (coefficients) during training, and Keras evaluates a held-out validation split at the end of each epoch.
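As context for the scaling step, a normalizing scaler maps each column to [0, 1] via (x - min) / (max - min). A small numpy sketch of that idea (not the IDAES OffsetScaler implementation itself):

```python
import numpy as np

# Two-column toy input matching the bypass fraction / NG-Steam ratio bounds
x = np.array([[0.1, 0.8],
              [0.45, 1.0],
              [0.8, 1.2]])

offset = x.min(axis=0)                   # column-wise minimum
factor = x.max(axis=0) - x.min(axis=0)   # column-wise range

x_scaled = (x - offset) / factor         # normalize each column to [0, 1]
x_unscaled = x_scaled * factor + offset  # inverse transform recovers x
```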
Finally, after training the model we save the results and model expressions to a folder which contains a serialized JSON file. Serializing the model in this fashion enables importing a previously trained set of surrogate models into external flowsheets. This feature will be used later.
# capture long output (not required to use surrogate API)
from io import StringIO
import sys
stream = StringIO()
oldstdout = sys.stdout
sys.stdout = stream
# selected settings for regression (best fit from options above)
activation, optimizer, n_hidden_layers, n_nodes_per_layer = 'tanh', 'Adam', 2, 40
loss, metrics = 'mse', ['mae', 'mse']
# Create data objects for training using scalar normalization
n_inputs = len(input_labels)
n_outputs = len(output_labels)
x = input_data
y = output_data
input_scaler = OffsetScaler.create_normalizing_scaler(x)
output_scaler = OffsetScaler.create_normalizing_scaler(y)
x = input_scaler.scale(x)
y = output_scaler.scale(y)
x = x.to_numpy()
y = y.to_numpy()
# Create Keras Sequential object and build neural network
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(units=n_nodes_per_layer, input_dim=n_inputs, activation=activation))
for i in range(1, n_hidden_layers):
    model.add(tf.keras.layers.Dense(units=n_nodes_per_layer, activation=activation))
model.add(tf.keras.layers.Dense(units=n_outputs))
# Train surrogate (calls optimizer on neural network and solves for weights)
model.compile(loss=loss, optimizer=optimizer, metrics=metrics)
mcp_save = tf.keras.callbacks.ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
history = model.fit(x=x, y=y, validation_split=0.2, verbose=1, epochs=1000, callbacks=[mcp_save])
# save model to JSON and create callable surrogate object
xmin, xmax = [0.1, 0.8], [0.8, 1.2]
input_bounds = {input_labels[i]: (xmin[i], xmax[i])
                for i in range(len(input_labels))}
keras_surrogate = KerasSurrogate(model, input_labels=list(input_labels), output_labels=list(output_labels),
                                 input_bounds=input_bounds, input_scaler=input_scaler, output_scaler=output_scaler)
keras_surrogate.save_to_folder('keras_surrogate')
# revert back to normal output capture
sys.stdout = oldstdout
# display first 50 lines and last 50 lines of output
celloutput = stream.getvalue().split('\n')
for line in celloutput[:50]:
    print(line)
print('.')
print('.')
print('.')
for line in celloutput[-50:]:
    print(line)
INFO:tensorflow:Assets written to: keras_surrogate/assets
Epoch 1/1000 3/3 [==============================] - 0s 76ms/step - loss: 0.3703 - mae: 0.5194 - mse: 0.3703 - val_loss: 0.3230 - val_mae: 0.4945 - val_mse: 0.3230
Epoch 2/1000 3/3 [==============================] - 0s 16ms/step - loss: 0.3078 - mae: 0.4684 - mse: 0.3078 - val_loss: 0.2686 - val_mae: 0.4450 - val_mse: 0.2686
.
.
.
Epoch 999/1000 3/3 [==============================] - 0s 16ms/step - loss: 1.1611e-04 - mae: 0.0081 - mse: 1.1611e-04 - val_loss: 7.2370e-05 - val_mae: 0.0064 - val_mse: 7.2370e-05
Epoch 1000/1000 3/3 [==============================] - 0s 11ms/step - loss: 1.1746e-04 - mae: 0.0083 - mse: 1.1746e-04 - val_loss: 8.1322e-05 - val_mae: 0.0068 - val_mse: 8.1322e-05
3.3 Visualizing surrogates¶
Now that the surrogate models have been trained, they can be visualized through scatter, parity and residual plots to confirm their validity in the chosen domain. The training data is visualized first to confirm that the surrogates fit the data, and then the validation data is visualized to confirm that the surrogates accurately predict new output values.
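For reference, the quantities behind these plots are simple: a parity plot compares predicted against observed values (a perfect surrogate falls on the diagonal), and a residual plot shows their difference. A small numpy illustration with hypothetical values:

```python
import numpy as np

# Hypothetical observed and surrogate-predicted outputs
y_true = np.array([1.00, 2.00, 3.00, 4.00])
y_pred = np.array([1.05, 1.95, 3.10, 3.90])

# Parity plot: scatter of y_pred vs. y_true against the line y_pred = y_true
# Residual plot: scatter of (y_pred - y_true) vs. y_true
residual = y_pred - y_true
```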
# visualize with IDAES surrogate plotting tools
surrogate_scatter2D(keras_surrogate, data_training, filename='keras_train_scatter2D.pdf')
surrogate_parity(keras_surrogate, data_training, filename='keras_train_parity.pdf')
surrogate_residual(keras_surrogate, data_training, filename='keras_train_residual.pdf')