2019 4th International Conference on Information Systems Engineering (ICISE)

Protecting the Intellectual Properties of Digital Watermark Using Deep Neural Network

Farah Deeba
School of Information and Software Engineering, University of Electronic Science and Technology of China
Shahe Campus: No.4, Section 2, North Jianshe Road, 610054
e-mail: [email protected]

Getenet Tefera
School of Information and Software Engineering, University of Electronic Science and Technology of China
Shahe Campus: No.4, Section 2, North Jianshe Road, 610054
e-mail: [email protected]

She Kun
School of Information and Software Engineering, University of Electronic Science and Technology of China
Shahe Campus: No.4, Section 2, North Jianshe Road, 610054
e-mail: [email protected]

Hira Memon
Department of Computer Engineering, Quaid E Awam University of Engineering, Science and Technology, Nawabshah, Sindh, Pakistan
e-mail: [email protected]

Abstract— Recent advances in artificial intelligence, machine learning, and deep neural networks (DNNs) have driven robust applications such as image processing, speech recognition, and natural language processing, and DNN algorithms have overcome many earlier limitations; in particular, trained DNN models make it easy for researchers to produce state-of-the-art results. However, sharing these trained models remains a challenging task in terms of security and protection. We performed extensive experiments to analyze watermarking in DNNs. We propose a DNN model for digital watermarking that addresses the intellectual property of deep neural networks, watermark embedding, and owner verification. The model generates watermarks that withstand possible attacks (fine-tuning and train-to-embed) and is tested on a standard dataset; hence the model is robust to these counter-watermark attacks. Our model accurately and instantly verifies the ownership of remotely deployed deep learning models without affecting model accuracy on standard input data.

Keywords- watermark; embedding; ownership verification; deep neural network

I. INTRODUCTION

Deep learning and machine learning methods have recently become a hot topic in many computer vision tasks, such as object and image recognition in still images. While successful techniques have been demonstrated for image understanding, video content still presents additional challenges, e.g., motion, temporal consistency, and spatial localization, that usually cannot be bridged with still-image recognition solutions. In the structure of a DNN, neurons are interconnected and joined together by edges [1-2]. A DNN solves a complex set of calculations using input, convolution, pooling, and output layers [3]. Deep learning wrappers facilitate the design and training of models over many frameworks such as Theano, TensorFlow, Torch, Chainer, and Keras [4-8]. These frameworks reduce the effort required of engineers and researchers in deep learning, but training a deep model is still not easy, since it requires a massive amount of data and time. DNNs are found in multiple applications: GoogLeNet, ResNet, VGGNet, AlexNet, and LeNet [9-13]. This paper focuses on the classification of watermark images using a DNN.

Some models, such as ResNet and VGGNet [10-11], take several days to train, which makes training difficult; nowadays GPUs are commonly used to reduce training and computation times. Some trained models are also available online, which makes it convenient to try them directly. Sharing models is therefore an essential part of research progress and of the future development of deep neural networks (DNNs). Digital platforms offer trained models, much like an app store for artificial intelligence models, but security is required to protect these models.

In our work, we employ DNN models for digital watermarking technology. We propose a watermarking scheme for deep neural networks that is black-box in terms of verification. Watermarking is applied to establish possession of the copyright of digital contents such as audio, images, and videos.

2160-1291/19/$31.00 ©2019 IEEE 91
DOI 10.1109/ICISE.2019.00025
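The layer types named in the introduction (input, convolution, pooling, output) can be illustrated with a minimal plain-Python sketch. This is illustrative only, not the paper's network; the image, kernel, and function names are our own toy choices.

```python
# Minimal single-channel 2D convolution and 2x2 max pooling,
# the building blocks of the DNN layers discussed above.

def conv2d(image, kernel):
    """'Valid' convolution (really cross-correlation, as in most DNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fm):
    """2x2 max pooling with stride 2."""
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

image = [[1, 0, 2, 1],
         [0, 1, 3, 0],
         [2, 1, 0, 1],
         [1, 0, 1, 2]]
edge = [[1, -1]]             # a 1x2 horizontal-difference kernel
fm = conv2d(image, edge)     # 4x3 feature map
pooled = max_pool2x2(fm)     # 2x1 pooled map
```

Stacking such convolution and pooling stages, followed by dense layers, yields the kind of classifier used later in this paper.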
We performed the experiment of watermark embedding through a DNN. In the watermarking process, we train the neural network to be as accurate as possible with respect to a set of randomly chosen inputs and labels, which formalizes the idea of backdooring a neural network with specific properties. Neural networks are preferred here because individual parameters matter less than overall performance; it is therefore necessary to preserve the performance of the trained host network. There are some main properties we want our watermarking scheme to achieve. The first is that it should be functionality-preserving: if the watermark harmed the task, there would be no point in watermarking. To check this, we ran experiments training with and without the watermark, and it turns out that the accuracy of the neural network is almost the same; it sometimes even increases slightly.

FIGURE 1. WATERMARKING LIFE CYCLE
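As a concrete point of comparison for the classical life cycle in Fig. 1 (embed e = E(W, C), later extract W from e), here is a minimal least-significant-bit sketch for a grayscale carrier. The function names are ours, and LSB is only a toy stand-in for the embedding algorithm E, not the paper's DNN-based scheme.

```python
# Classical watermark life cycle: e = E(W, C) embeds watermark bits W
# into carrier pixels C; extraction reads them back.

def embed(carrier, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    marked = list(carrier)
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & ~1) | b
    return marked

def extract(marked, n_bits):
    """Read the watermark back out of the marked carrier."""
    return [p & 1 for p in marked[:n_bits]]

carrier = [200, 13, 57, 88, 129, 42]   # toy grayscale pixel values
watermark = [1, 0, 1, 1]
e = embed(carrier, watermark)          # e = E(W, C)
recovered = extract(e, len(watermark)) # W recovered from e
```

Each marked pixel changes by at most 1, which is why LSB embedding is visually imperceptible yet trivially removable; robustness against removal is exactly what the DNN-based scheme below targets.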
From Fig. 1 we can recognize the complete typical watermarking process. An embedding algorithm E embeds the pre-trained watermark W into the carrier data C that should be protected from attacks. After embedding, the data are stored as e = E(W, C), and decryption is later applied to extract the watermark W from e.

II. DNN WATERMARKING

Digital watermarking is the process of hiding information in digital media to preserve the ownership of the media data. Many approaches have been introduced to make the watermark both effective and robust to removal by different attacks. Based on the extraction method, digital image watermarking approaches can be split into two main categories, blind watermarking and non-blind watermarking [14]. Traditional algorithms for digital watermarking can be classified into two methods: the transform domain, also known as the frequency domain [15], and the spatial domain [16]. Blind watermarking needs only the watermarked image for the extraction process; in the non-blind approach, however, the watermarked image, the original image, and extra information are all required to extract the watermark. Spatial-domain digital watermarking algorithms have been proposed in [16-18].

Uchida et al. [20] introduced the first approach for embedding watermarks into a deep neural network, embedding the data into the weights of the DNN. This assumes that a stolen model is locally accessible so that its set of parameters can be obtained, which is often not possible: most deep learning models are used as online services, and it is troublesome to directly procure access to model parameters, particularly for stolen models. As DNN models are broadly deployed and become more critical, they are frequently targeted by adversaries, who can steal a model and build a copy of the AI service.

Because a DNN can learn patterns automatically once its parameters are fixed, we apply DNNs to watermarking for two purposes: to protect intellectual property and to verify ownership through embedded watermarks. The goal of the framework is to preserve the intellectual property of deep neural networks by verifying possession of remote DNN services with embedded watermarks. In our model we first define labels for the different watermarks and train these labels into the deep neural network; embedding the watermark into an existing model requires far less computation than training from scratch. As in cryptography, we assume a gap in computational power between the attacker and the defender; the scheme can never defend against an adversary who simply builds a comparable model on their own.

FIGURE 2. WORKFLOW OF DNN WATERMARKING

Fig. 2 shows the DNN watermarking framework. It first generates the watermarks, which also serve as fingerprints for ownership verification. These watermarks are then embedded into the DNN through training, and the DNN automatically learns the patterns of the watermarks and retains them. Finally, after embedding, the resulting model is strong enough to verify ownership: the owner can prove possession simply by sending a watermark as input to the deployed DNN model and checking the output. The rest of the paper covers watermark generation, the types of attacks considered (fine-tuning and train-to-embed), owner verification, and finally the experimental setup.

III. EMBEDDING AND WATERMARKS GENERATION

In embedding, we retrain the network on an input distribution that is entirely unrelated to the task, using the labels so that the neural network is as accurate on the training data as possible without over-fitting. These random inputs are chosen independently of the input distribution, and arbitrary labels are fixed for them in advance. Different ways of determining the input distribution have different security properties and therefore require a separate analysis. We applied the watermarking algorithm already used by Zhang et al. [14].

A. Algorithm: Watermark Embedding

Input: training data Dtrain with original labels Yc; transformation keys {Yc, Yd}
Output: protected DNN model Fwm; watermark dataset Dwm
1: Dwm ← ∅
2: for each sample (x, Yc(x)) ∈ Dtrain selected for watermarking do
3:     xwm ← embed_pattern(x)
4:     ywm ← Yd(x), where Yd(x) ≠ Yc(x)
5:     Dwm ← Dwm ∪ {(xwm, ywm)}
6: end for
7: Fwm ← train(Dtrain ∪ Dwm)
8: return (Fwm, Dwm)

The embedding algorithm shows that DNN watermark embedding takes the original training data (Dtrain) and the transformation keys {Yc, Yd} as inputs, and finally outputs the protected DNN model together with the watermark dataset (Dwm). With the help of the transform key we label the watermarks, which are defined by the owner: Yd contains the owner-defined labels for the watermarks, and Yc is the original label data.

We used the same amount of training with and without the watermark, so the occasional small increase in accuracy is apparently just an outlier. We also validated experimentally that an attacker cannot remove the watermark with one specific attack alone, but there are a few more attacks, so we fine-tuned either only the last layer or all layers, as fine-tuning is a standard thing to do when more training data is available to adapt a neural network. The following two circumstances describe how the copyright owner embeds a watermark into the host network, in training or in fine-tuning.

B. Train-to-Embed

Train-to-embed means training from scratch and embedding the watermark at the same time, where the labeled training data of the host network are available.

C. Fine-Tuning to Embed

One might expect that fine-tuning the whole network would annihilate the watermark; this was kind of unexpected for us, but when the watermark is embedded during training it simply stays in place (the yellow bar on the left side of the corresponding plot).

In operation, for each dataset we divided the testing dataset into two parts: the first part is used to fine-tune the previously trained DNN, while the second part is used to evaluate the new models. We then measure the testing and watermark accuracy of the different new models to estimate the robustness of our watermarking framework against alterations produced by fine-tuning.

D. Ownership Verification

To verify ownership, we send regular queries with the generated watermark images Dwm; if the query responses match, i.e., the model maps each xwm to its ywm, we can validate that our model is protected. This is due to the DNN model [20]: a model without embedded watermarks cannot understand them, so it will classify clean images but will not correctly classify images with embedded watermarks.

E. Meaningful Content (Watermark Content)

We add text to training images to give them meaningful content, and these images are taken as input images. For example, we insert the string "TEST" into the Lena image; the query watermark ("TEST" on the Lena image) with its predefined prediction ("Lena") then serves as a fingerprint for model verification.

FIGURE 3. LENA IMAGES WITH AND WITHOUT MEANINGFUL CONTENT

IV. DATASET

We use the following benchmark image datasets for the evaluation; the architecture and training parameters of the DNN models for each dataset are given below. The digital image watermarking technique is tested on six well-known gray images, including the Lena image with a dimension of 512*512, and on the standard image set Set5, whose images of different sizes have been analyzed in this study, as shown in Fig. 4.
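The black-box ownership-verification step described above can be sketched as a query check: send the watermark set Dwm to the deployed model and count how many predictions match the owner's labels. The function names, stub models, and the 0.9 threshold are our illustrative choices, not the paper's.

```python
# Black-box ownership verification: query the remote model with the
# watermark inputs and check that predictions match the owner labels.
# A model without the embedded watermark will fail this check.

def verify_ownership(predict, d_wm, threshold=0.9):
    """predict: black-box function x -> label (e.g., a remote API call).
    d_wm: list of (x_wm, y_wm) watermark query pairs."""
    hits = sum(1 for x_wm, y_wm in d_wm if predict(x_wm) == y_wm)
    return hits / len(d_wm) >= threshold

# Stub "deployed models": lookup tables standing in for trained DNNs.
watermarked_model = {"wm1": "owner", "wm2": "owner", "wm3": "owner"}
clean_model = {"wm1": "cat", "wm2": "dog", "wm3": "cat"}
d_wm = [("wm1", "owner"), ("wm2", "owner"), ("wm3", "owner")]

owned = verify_ownership(watermarked_model.get, d_wm)   # watermarked model passes
not_owned = verify_ownership(clean_model.get, d_wm)     # clean model fails
```

Only the query interface is needed, which is what makes the scheme black-box: the verifier never inspects model weights, in contrast to the weight-embedding approach of Uchida et al.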
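The meaningful-content watermark described above (e.g., the string "TEST" written onto the Lena image) can be sketched by stamping the text's byte values into a corner of the pixel array. Real schemes render visible text onto the image; this byte-stamping is only our illustrative stand-in.

```python
# Stamp a text string (e.g., "TEST") into the top-left pixels of an
# image, giving the watermarked training sample a meaningful mark.

def stamp_text(image, text):
    """image: list of pixel rows; the text's byte values overwrite the
    first len(text) pixels of row 0 (illustrative, not rendered text)."""
    marked = [row[:] for row in image]   # leave the original untouched
    for i, ch in enumerate(text):
        marked[0][i] = ord(ch)
    return marked

image = [[0] * 8 for _ in range(8)]      # toy 8x8 grayscale image
wm_image = stamp_text(image, "TEST")
```

Pairing each such stamped image with a predefined label (here, "Lena") yields the fingerprint queries used for verification.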
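The watermark-embedding procedure of Section III (build Dwm from Dtrain using the owner-defined labels Yd, then train on the union) can be sketched as follows. The helper names (`apply_pattern`, `embed_watermark`) and the "WM" marker are our illustrative stand-ins for a pixel-pattern overlay and a real training loop.

```python
# Sketch of the watermark-embedding algorithm: from original training
# data D_train (labels Y_c) and an owner-defined label y_d, build a
# watermark set D_wm and form the union the DNN is trained on, so the
# model learns both the original task and the owner's fingerprints.

def apply_pattern(x):
    """Illustrative watermark: append a fixed marker to the sample."""
    return tuple(x) + ("WM",)            # stand-in for a pixel overlay

def embed_watermark(d_train, y_d):
    """d_train: list of (x, y_c) pairs with original labels Y_c.
    y_d: owner-defined target label for watermarked inputs (y_d != y_c)."""
    d_wm = [(apply_pattern(x), y_d) for x, _ in d_train]
    protected_training_set = d_train + d_wm   # train the DNN on this union
    return protected_training_set, d_wm

d_train = [((0.1, 0.2), "cat"), ((0.5, 0.9), "dog")]
full_set, d_wm = embed_watermark(d_train, y_d="owner-label")
```

Training on `full_set` in place of `d_train` is what embeds the watermark while preserving accuracy on the original task.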
FIGURE 4. STANDARD DATASET

V. EXPERIMENT

We ran all our experiments using Python 2.7 with the Spyder Integrated Development Environment (IDE) and the Keras [8] and TensorFlow [5] frameworks, on a GeForce GTX 770 with cuDNN 5110. We assess the performance of our watermarking framework on the standard dataset with a learning rate of 0.5. We trained our model over multiple epochs; each run gives almost the same error rate and the same accuracy. We therefore observe that our embedding DNN model performs well against the types of attacks considered, which means it can embed the watermark without impairing the performance of the host network.

Table I shows the accuracy of our model during training, and Table II shows the architecture of our DNN model.

TABLE I. THE ACCURACY OF THE DNN MODEL

Watermark content        Accuracy
Watermarks (trained)     54.7%
Watermarks (test)        51.89%

TABLE II. THE ARCHITECTURE OF THE DNN MODEL

Layer type       Lena dataset
Conv.ReLU        32 filters (3 × 3)
Conv.ReLU        32 filters (3 × 3)
Max Pooling      2 × 2
Conv.ReLU        64 filters (3 × 3)
Conv.ReLU        64 filters (3 × 3)
Max Pooling      2 × 2
Dense.ReLU       256
Dense.ReLU       256
Softmax          10

VI. CONCLUSION AND FUTURE WORK

We generalized the digital watermark using a deep neural network and remotely verified the ownership of DNN models based on the embedded watermarks. Our paper explained the idea of the digital watermark in a deep neural network: we studied different watermarking frameworks, embedded them into a DNN model, and finally performed owner verification. The resulting DNN model successfully verified ownership based on the embedded watermarks. This performance was evaluated on the standard dataset, and we showed that our framework can meet the general watermarking standard; hence our model is robust to the possible attacks above. We could not reach the accuracy previously reported in [20], so in future work the accuracy can be improved, and the framework can be tested on further standard datasets.

ACKNOWLEDGMENTS

This work is supported by the Sichuan Science and Technology Support Program under grant no. 2018GZDZX0012.

REFERENCES

[1] N. Aloysius and M. Geetha, "A review on deep convolutional neural networks," 2017 International Conference on Communication and Signal Processing (ICCSP), Chennai, 2017, pp. 0588-0592, doi: 10.1109/ICCSP.2017.8286426.
[2] K. O'Shea and R. Nash, "An introduction to convolutional neural networks," arXiv e-prints, CoRR abs/1511.08458, 2015.
[3] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, "Multimodal deep learning," in Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp. 689-696.
[4] Theano Development Team, "Theano: A Python framework for fast computation of mathematical expressions," arXiv e-prints, abs/1605.02688, 2016.
[5] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv preprint arXiv:1603.04467, 2016.
[6] R. Collobert, K. Kavukcuoglu, and C. Farabet, "Torch7: A Matlab-like environment for machine learning," in BigLearn, NIPS Workshop, 2011, no. EPFL-CONF-192376.
[7] S. Tokui, K. Oono, S. Hido, and J. Clayton, "Chainer: A next-generation open source framework for deep learning," in Proc. of NIPS Workshop on Machine Learning Systems, 2015.
[8] F. Chollet, "Keras," GitHub repository, 2015, https://github.com/keras-team/keras.
[9] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," in Proc. of CVPR, 2015, pp. 1-9.
[10] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. of CVPR, 2016; arXiv preprint arXiv:1512.03385.
[11] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. of ICLR, 2015; CoRR abs/1409.1556.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. of NIPS, 2012.
[13] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, 86(11):2278-2324, 1998.
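For orientation, the parameter counts of the convolutional layers in Table II can be checked independently of the input size and padding (which the paper does not state) using the standard formula k * k * c_in * c_out + c_out. The grayscale-input assumption below is ours; dense-layer counts are omitted because they depend on the flattened feature-map size.

```python
# Parameter counts for the conv layers of the Table II architecture.
# conv params = kernel_h * kernel_w * in_channels * out_channels + biases.

def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out

layers = [
    ("Conv.ReLU 32 @3x3", conv_params(3, 1, 32)),    # assumes 1-channel input
    ("Conv.ReLU 32 @3x3", conv_params(3, 32, 32)),
    ("Conv.ReLU 64 @3x3", conv_params(3, 32, 64)),
    ("Conv.ReLU 64 @3x3", conv_params(3, 64, 64)),
]
total_conv = sum(p for _, p in layers)   # pooling layers add no parameters
```

Under these assumptions the four convolutional layers contribute 320, 9248, 18496, and 36928 parameters respectively; the two 256-unit dense layers dominate the total once the flattened size is fixed.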
[14] J. Zhang, Z. Gu, J. Jang, H. Wu, M. Ph. Stoecklin, H. Huang, and I. Molloy, "Protecting intellectual property of deep neural networks with watermarking," 2018, pp. 159-172, doi: 10.1145/3196494.3196550.
[15] E. Elbasi, "Robust multimedia watermarking: Hidden Markov model approach for video sequences," Turk J Elec Eng & Comp Sci, vol. 18, pp. 159-170, 2010.
[16] C. Munesh and S. Pandey, "A DWT domain visible watermarking technique for digital images," in International Conference on Electronics and Information Engineering (ICEIE'10), Kyoto, 2010, pp. V2-421-V2-427, doi: 10.1109/ICEIE.2010.5559809.
[17] L. Li and B. Guo, "Localized image watermarking in spatial domain resistant to geometric attacks," International Journal of Electronics and Communications, vol. 63, no. 2, Feb. 2009, pp. 123-131, doi: 10.1016/j.aeue.2007.11.007.
[18] Y.-K. Lee, G. Bell, S.-Y. Huang, R.-Z. Wang, and S.-J. Shyu, "An advanced least-significant-bit embedding scheme for steganographic encoding," in Proceedings of the 3rd Pacific-Rim Symposium on Image and Video Technology (PSIVT'09), 2009, pp. 349-360, doi: 10.1007/978-3-540-92957-4_31.
[19] J. Tian, "Reversible data embedding using a difference expansion," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 8, Aug. 2003, doi: 10.1109/TCSVT.2003.815962.
[20] Y. Uchida, Y. Nagai, S. Sakazawa, and S. Satoh, "Embedding watermarks into deep neural networks," in Proceedings of the 2017 ACM International Conference on Multimedia Retrieval (ICMR '17), pp. 269-277, doi: 10.1145/3078971.3078974.