Table 4.

CNN architecture.

ID   Type                  Size      n     Activation   Param count
(1)  (2)                   (3)       (4)   (5)          (6)
1    Input                 64 × 64
2    Convolutional         11 × 11   64    ReLU         23 296
3    Max pooling           2 × 2
4    Convolutional         7 × 7     128   ReLU         401 536
5    Dropout (0.2)
6    Convolutional         5 × 5     128   ReLU         409 728
7    Max pooling           2 × 2
8    Convolutional         5 × 5     256   ReLU         819 456
9    Dropout (0.2)
10   Convolutional         3 × 3     256   ReLU         590 080
11   Max pooling           2 × 2
12   Fully connected       1024            ReLU         2 360 320
13   Dropout (0.2)
14   Fully connected       1024            ReLU         1 049 600
15   Dropout (0.2)
16   Fully connected       512             ReLU         524 800
17   Dropout (0.2)
18   Fully connected       512             ReLU         262 656
19   Fully connected       1               Sigmoid      513
     Total params                                       6 441 985
     Trainable params                                   6 441 985
     Non-trainable params                               0

Notes. (1) Layer ID. (2) Layer type. (3) Size of the input data or of the filters. (4) Number of filters. (5) Activation function used in the layer. (6) Number of trainable parameters.
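For concreteness, the architecture in Table 4 can be written out in a standard deep-learning framework. The sketch below is a minimal Keras (TensorFlow) version, not the authors' code: the table does not state padding, strides, or the number of input channels, so a three-channel 64 × 64 input is assumed here (consistent with the first convolutional layer's 23 296 parameters = 11 · 11 · 3 · 64 + 64) together with the Keras defaults of 'valid' padding and unit strides.

```python
# Minimal Keras sketch of the CNN in Table 4 (an illustration, not the authors' code).
# Assumptions: a 64 x 64 x 3 input (implied by the first layer's 23 296 parameters
# = 11*11*3*64 + 64) and the Keras defaults of 'valid' padding and unit strides,
# since the table does not state them.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),                   # 1  input (channel count assumed)
    layers.Conv2D(64, (11, 11), activation="relu"),   # 2  23 296 params
    layers.MaxPooling2D((2, 2)),                      # 3
    layers.Conv2D(128, (7, 7), activation="relu"),    # 4  401 536 params
    layers.Dropout(0.2),                              # 5
    layers.Conv2D(128, (5, 5), activation="relu"),    # 6  409 728 params
    layers.MaxPooling2D((2, 2)),                      # 7
    layers.Conv2D(256, (5, 5), activation="relu"),    # 8  819 456 params
    layers.Dropout(0.2),                              # 9
    layers.Conv2D(256, (3, 3), activation="relu"),    # 10 590 080 params
    layers.MaxPooling2D((2, 2)),                      # 11
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),            # 12 count depends on padding/strides
    layers.Dropout(0.2),                              # 13
    layers.Dense(1024, activation="relu"),            # 14 1 049 600 params
    layers.Dropout(0.2),                              # 15
    layers.Dense(512, activation="relu"),             # 16 524 800 params
    layers.Dropout(0.2),                              # 17
    layers.Dense(512, activation="relu"),             # 18 262 656 params
    layers.Dense(1, activation="sigmoid"),            # 19 513 params
])

model.summary()  # per-layer and total parameter counts
```

Calling model.summary() reproduces column (6) exactly for every convolutional and dense layer except layer 12 (and hence the total), whose count depends on the padding and stride choices the table leaves unspecified.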
