A MAJOR PROJECT REPORT
ON
“NEURAL NETWORK-BASED DETECTION OF FRAUDULENT
PROFILES IN SOCIAL MEDIA PLATFORMS”
Submitted to
SRI INDU COLLEGE OF ENGINEERING & TECHNOLOGY, HYDERABAD
In partial fulfillment of the requirements for the award of degree of
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
Submitted by
P.RAVALI [20D41A05G2]
P.VYSHNAVI [20D41A05G1]
P.MITHUN [20D41A05G6]
S.NITHIN [20D41A05H7]
Under the esteemed guidance of
Mr. K.VIJAY KUMAR
(Assistant Professor)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY
(An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH)
Sheriguda (V), Ibrahimpatnam (M), Rangareddy Dist – 501 510
(2023-2024)
SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY
(An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH)
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
CERTIFICATE
Certified that the Major Project entitled “NEURAL NETWORK-BASED DETECTION OF
FRAUDULENT PROFILES IN SOCIAL MEDIA PLATFORMS” is a bonafide work
carried out by P.RAVALI [20D41A05G2], P.VYSHNAVI [20D41A05G1], P.MITHUN
[20D41A05G6], S.NITHIN [20D41A05H7] in partial fulfillment for the award of the degree of
Bachelor of Technology in Computer Science and Engineering of SICET, Hyderabad for the
academic year 2023-2024. The project has been approved as it satisfies the academic requirements
in respect of the work prescribed for the IV Year, II-Semester of the B.Tech course.
INTERNAL GUIDE HEAD OF THE DEPARTMENT
(Mr. K. VIJAY KUMAR) (Prof. Ch. GVN. Prasad)
(Assistant Professor) EXTERNAL EXAMINER
ACKNOWLEDGEMENT
The satisfaction that accompanies the successful completion of a task would be
incomplete without mentioning the people who made it possible, whose constant guidance
and encouragement crown all efforts with success. We are thankful to our Principal,
Dr. G. SURESH, for giving us permission to carry out this project. We are highly indebted
to Prof. Ch. GVN. Prasad, Head of the Department of Computer Science and Engineering, for
providing the necessary infrastructure and labs, and also for valuable guidance at every stage of this
project. We are grateful to our internal project guide, Mr. K. VIJAY KUMAR, Assistant
Professor, for his constant motivation and guidance during the execution of this
project work. We would like to thank the Teaching & Non-Teaching staff of the Department of
Computer Science and Engineering for sharing their knowledge with us. Last but not least, we
express our sincere thanks to everyone who helped directly or indirectly in the completion of
this project.
P.RAVALI [20D41A05G2]
P.VYSHNAVI [20D41A05G1]
P.MITHUN [20D41A05G6]
S.NITHIN [20D41A05H7]
ABSTRACT
Social media platforms have become pervasive in modern society, offering opportunities for
individuals to connect, share information, and engage in various activities. However, the rise of
fraudulent activities, such as fake profiles, poses significant challenges to the integrity and
security of these platforms. Traditional methods of detecting fraudulent profiles often rely on
manual inspection or rule-based systems, which can be time-consuming and ineffective in
identifying sophisticated fraudulent behavior.
This study proposes a novel approach using neural networks for the automated detection of
fraudulent profiles in social media platforms. By leveraging the power of deep learning
techniques, the proposed system learns intricate patterns and features from large-scale datasets,
enabling it to effectively distinguish between genuine and fraudulent profiles. The neural
network model is trained on diverse sets of features, including user behavior patterns, content
characteristics, network structure, and temporal dynamics, to capture the complex nature of
fraudulent activities.
Experimental results demonstrate the efficacy of the proposed approach in detecting fraudulent
profiles with high accuracy and efficiency. Compared to traditional methods, the neural network-
based detection system achieves superior performance in terms of precision, recall, and F1-score.
Moreover, the model exhibits robustness against various evasion techniques employed by
fraudsters, making it suitable for real-world deployment in social media platforms.
Social networking sites such as Facebook, Twitter, Instagram, etc. are extremely popular. Users
constantly interact with their friends via these social network sites and share their personal and
public information through them. An immense number of people use social networking sites due
to their attractiveness. This fame causes problems for the websites due to the creation of fake
accounts. The owners of fake accounts pull out personal information about other people and
spread fake data on social networks. In our proposed approach, we use machine learning
techniques such as Neural Networks and SVM for detecting fake accounts on Facebook or
Twitter. Different data mining tools have been used for the simulation of the algorithm, and the
obtained results are presented. The data mining tool allows quick user interaction with a simple
interface for identifying fake accounts from the available data. We classify the data using the
above machine learning techniques, which identify fake accounts on social sites.
The results demonstrated that the proposed method could detect fake profiles with an accuracy of
99.4%, equivalent to the achieved findings based on bigger data sets and more extensive profile
information. The results were obtained with the minimum available profile data. In addition, in
comparison with the other methods that use the same amount and kind of data, the proposed deep
neural network gives an increase in accuracy of roughly 14%. The proposed model outperforms
existing methods, achieving high accuracy and F1 score in identifying fake profiles. The
associated findings indicate that the proposed model attained an average accuracy of 99% while
considering two distinct scenarios: one with a single theme and another with a miscellaneous
one. The results demonstrate the potential of DNNs in addressing the challenging problem of
detecting fake profiles, which has significant implications for maintaining the authenticity and
trustworthiness of online social networks.
CONTENTS
S.No. Chapters Page No.
i. List of contents .......................................................................................... i
ii. List of Figures ..........................................................................................iii
iii. List of Screenshots .................................................................................. iv
1. INTRODUCTION
1.1 INTRODUCTION TO PROJECT .............................................................................01
1.2 LITERATURE SURVEY .........................................................................................04
1.3 MODULES.............................................................................................................. 07
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM & ITS DISADVANTAGES.................................................. 08
2.2 PROPOSED SYSTEM & ITS ADVANTAGES......................................................09
2.3 SYSTEM REQUIREMENTS...................................................................................10
3. SYSTEM STUDY
3.1 FEASIBILITY STUDY ...........................................................................................11
4. SYSTEM DESIGN
4.1 ARCHITECTURE .................................................................................................... 12
4.2 UML DIAGRAMS ................................................................................................... 13
4.2.1 USECASE DIAGRAM. ................................................................................14
4.2.2 CLASS DIAGRAM......................................................................................14
4.2.3 SEQUENCE DIAGRAM............................................................................. 15
4.2.4 COLLABORATION DIAGRAM................................................................ 16
5. TECHNOLOGIES USED
5.1 WHAT IS PYTHON..............................................................................................17
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON .........................18
5.1.2 HISTORY............................................................................................... 21
5.2.1 CATEGORIES OF ML ......................................................................... 24
5.2.2 NEED FOR ML...............................................................................….. 25
5.2.3 CHALLENGES IN ML...........................................................................26
5.2.4 APPLICATIONS .................................................................................... 27
5.2.5 HOW TO START LEARNING ML? ......................................................28
5.2.6 ADVANTAGES & DISADVANTAGES OF ML ................................... 29
5.3 PYTHON DEVELOPMENT STEPS ....................................................................... 31
5.4 MODULES USED IN PYTHON ............................................................................. 34
5.5 INSTALL PYTHON STEP BY STEP IN WINDOWS & MAC............................... 36
6. IMPLEMENTATION
6.1 SOFTWARE ENVIRONMENT..............................................................................44
6.1.1 PYTHON............................................................................................. 44
6.1.2 SAMPLE CODE...................................................................................45
7. SYSTEM TESTING
7.1 INTRODUCTION TO TESTING ............................................................................. 50
7.2 TESTING STRATEGIES ..........................................................................................52
8. SCREENSHOTS................................................................................... 53
9. CONCLUSION ..................................................................................... 59
10. REFERENCES........................................................................................60
LIST OF FIGURES
Fig No Name Page No
Fig.1 Architecture diagram 12
Fig.2 Use case diagram 14
Fig.3 Class diagram 14
Fig.4 Sequence diagram 15
Fig.5 Collaboration diagram 16
Fig.6 Installation of Python 36
LIST OF SCREENSHOTS
Fig No Name Page No
Fig.1 To run the project, click on the ‘run.bat’ file to get the below screen; click on the upload social network profile dataset button and upload the dataset 53
Fig.2 Selecting and uploading the dataset.txt file and then clicking on the open button to load the dataset 54
Fig.3 Click on preprocess dataset to remove missing values and to split the dataset into train and test parts 55
Fig.4 Click on the Run ANN algorithm button to see the final ANN accuracy 56
Fig.5 Click on the ANN accuracy & loss graph button 57
Fig.6 Click on predict fake/genuine profile to upload test data; the ANN will predict the result 58
1. INTRODUCTION
1.1 Introduction:
In the rapidly evolving landscape of social media, the rise of fraudulent profiles poses a
significant challenge to users, platforms, and cybersecurity experts alike. As the digital
realm becomes increasingly intertwined with our daily lives, the need for robust
mechanisms to identify and mitigate fraudulent activities is more pressing than ever before.
In this context, the application of neural networks emerges as a beacon of hope, offering a
sophisticated and dynamic solution to the complex problem of detecting fraudulent
profiles.
Social media platforms serve as virtual arenas for billions of users worldwide to connect,
share, and interact. However, this interconnectedness also presents a fertile ground for
malicious actors seeking to exploit vulnerabilities for personal gain, be it through identity
theft, financial scams, or the dissemination of misinformation. Traditional methods of fraud
detection often fall short in the face of rapidly evolving tactics employed by fraudsters,
necessitating a paradigm shift towards more advanced and adaptive approaches.
Enter neural networks, a branch of artificial intelligence inspired by the complex
interconnected structure of the human brain. These computational models excel at
recognizing patterns, making them particularly well-suited for tasks such as image
recognition, natural language processing, and, crucially, fraud detection. By leveraging
vast amounts of data and sophisticated algorithms, neural networks can discern subtle
indicators of fraudulent behavior that may elude human observers or conventional
detection methods.
The essence of neural network-based detection lies in its ability to learn and adapt in real-
time, continuously refining its understanding of what constitutes legitimate user behavior
versus suspicious activity. Through a process known as training, neural networks analyze
vast datasets comprising examples of both genuine and fraudulent profiles, extracting
underlying patterns and features that distinguish between the two. This process equips the
neural network with the capability to generalize its learnings and accurately identify fraudulent profiles.
Social networking sites have been extensively used as a medium of communication between people in
day-to-day life. Users of these sites constantly share their information and daily activities, which attracts
many people to them. The increasing popularity of Facebook and Twitter from the year
2006 to 2016 allows users to add friends and share various kinds of information such as personal,
social, economic, educational, political, business, etc. Moreover, they can also share photos, videos, and
other day-to-day interactions. However, some people do not use these sites with a good objective, and
therefore they create fake accounts on social sites. Fake accounts do not have any real identity, so we
can call their owners attackers. An attacker uses incorrect information or statistics about some real-world
person to create a fake account. Using these fake accounts, attackers spread fake information which
affects other users. Protecting such sensitive user data is one of the major challenges for social sites.
Several techniques in the field of machine learning have been developed to detect fake
accounts on social networking sites, such as Neural Networks (NN), Naive Bayes, Markov Models, and
Bayesian Networks. Recent research has found that these techniques provide
enhanced results in detecting fake accounts.
A Neural Network consists of many interconnected processing elements and takes decisions much like a
human brain. Support vector machines (SVMs) are supervised machine learning techniques
used for classification; an SVM finds the hyperplane that classifies the data. Neural networks and SVMs can accept
a large amount of raw data and are suitable for detecting fake accounts on social networking sites
based on various characteristics of the accounts. The naive Bayes classifier is based on Bayes’ theorem;
it predicts the probability that a given instance belongs to a particular class.
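The classifiers described above can be sketched in a few lines of scikit-learn. This is an illustrative example only, not the project's actual code: the feature names and the synthetic data are invented for demonstration.

```python
# Sketch: training an SVM and a Naive Bayes classifier on synthetic
# account features. Feature values are invented for demonstration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic features: [followers_count, friends_count, statuses_count]
genuine = rng.normal(loc=[300, 200, 500], scale=50, size=(100, 3))
fake = rng.normal(loc=[20, 900, 10], scale=50, size=(100, 3))
X = np.vstack([genuine, fake])
y = np.array([0] * 100 + [1] * 100)  # 0 = genuine, 1 = fake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

svm = SVC(kernel="rbf").fit(X_train, y_train)  # finds a separating hyperplane in kernel space
nb = GaussianNB().fit(X_train, y_train)        # applies Bayes' theorem per class

print("SVM accuracy:", svm.score(X_test, y_test))
print("Naive Bayes accuracy:", nb.score(X_test, y_test))
```

Both classifiers separate the two synthetic clusters easily; real profile data would be far noisier and would require careful feature engineering.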
In recent decades, social media has significantly influenced interpersonal relationships, transforming
the internet into a virtual platform for online development, trade, and exchanging knowledge by
individuals and their organizations. The various social communication systems have value chains
aimed at certain user groups. For example, users may reunite with old acquaintances by browsing
their Facebook profiles, and social media such as Twitter provide relevant updates and news from
the profiles a user follows. On the other hand, there are social network sites with different purposes, like
LinkedIn, which is intended to serve as a support system for professional groups. Therefore, users
are encouraged to fill their profiles with a significant number of personal data and to explore other users
who share the same interests. According to usage rates, Facebook is the most popular social media
platform, with 800 million monthly visits.
Estimates provided by Cloudmark suggest that between 20 and 40 percent of accounts on both
Facebook and Twitter might be fake profiles. It is becoming more challenging to tell a real user from a
fake due to the high levels of user engagement that occur every day and the millions of transactions
that take place each day. The anticipated outcomes from the efforts to elicit user participation in
identifying fake accounts have not been attained. In addition, when it comes to networks that have
strict user privacy policies, only a tiny amount of public data is available. Thus, systematically
differentiating between fake and valid profile pages before trusting a
possible association has become quite tricky. In this work, a method for distinguishing authentic accounts from false
accounts in a logical manner is proposed. The approach is based on the limited publicly accessible
account data on websites with strict privacy regulations, such as LinkedIn.
Social media’s growth can potentially raise people’s social evaluation and popularity. In particular,
social network users may gain popularity by amassing many likes, follows, and remarks. On the other
hand, establishing fake profiles is much too simple, and such accounts can be purchased online at little
cost. For instance, purchasing followers and comments on social media platforms such as Facebook
and Twitter can easily be done on the internet. Analysis of activity changes is one of the
most common techniques open social networks use to spot suspicious accounts. The activities
that people engage in tend to shift and evolve over time. Therefore, the server can identify a
potential scam account by monitoring for sudden changes in its access patterns to content and in
its activity. In case of unsuccessful identification, the deviant might fill the system with
fake information. This monitoring pattern is a common schema of fake account detection on social networks.
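The activity-change monitoring described above can be illustrated with a toy example. This is a sketch with invented data, not any platform's actual detection logic: it flags a day whose activity count far exceeds the mean of the preceding window.

```python
# Invented daily activity counts; the account's behavior spikes from day 8 on
daily_posts = [5, 6, 4, 5, 7, 6, 5, 80, 75, 90]

def flag_spikes(counts, window=5, factor=3.0):
    """Flag days whose count exceeds `factor` times the mean of the prior window."""
    flags = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        flags.append(counts[i] > factor * baseline)
    return flags

print(flag_spikes(daily_posts))
```

A real system would use far richer signals (login locations, content similarity, interaction graphs), but the principle of comparing current behavior against a learned baseline is the same.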
1.2 LITERATURE SURVEY:
1. "Fraud Detection in Social Media: A Review" by Yang et al. (2019)
- This comprehensive review examines various techniques and methodologies employed in
fraud detection within social media platforms. The paper discusses traditional approaches, such
as rule-based and statistical methods, as well as emerging techniques, including machine learning
and neural network-based approaches. It provides insights into the challenges and opportunities
associated with fraud detection in social media, laying the groundwork for further research in the
field.
2. "Deep Learning for Fraud Detection: A Comprehensive Review" by Zhou et al. (2020)
- Focusing specifically on deep learning techniques, this review paper provides a detailed
overview of the application of neural networks in fraud detection across different domains,
including finance, e-commerce, and social media. The authors explore various deep learning
architectures, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs),
and generative adversarial networks (GANs), highlighting their strengths and limitations in
detecting fraudulent activities. The paper also discusses challenges related to data imbalances,
interpretability, and scalability, offering valuable insights for researchers and practitioners.
3. "Detecting Fake Accounts in Online Social Networks at the Time of Registrations" by
Cao et al. (2019)
- This research article proposes a neural network-based approach for detecting fake accounts
on social media platforms at the time of registration. The authors leverage features extracted from
user profiles and registration activities to train a deep neural network classifier. Through
extensive experiments on real-world datasets, they demonstrate the effectiveness of their
approach in accurately identifying fraudulent profiles, thereby mitigating the proliferation of fake
accounts and improving the overall security of online social networks.
4. "Detecting Fake Accounts in Social Networks Using Deep Learning" by Sathyanarayana
et al. (2018)
- In this paper, the authors present a novel approach for detecting fake accounts in social
networks using deep learning techniques. They propose a deep neural network architecture that
combines convolutional and recurrent layers to capture both spatial and temporal patterns in user
behavior. By training the model on a large-scale dataset comprising genuine and fake accounts,
they achieve high accuracy in distinguishing between the two, demonstrating the efficacy of their
approach in combating fraudulent activities on social media platforms.
5. "A Survey of Fraud Detection Techniques in Social Media Networks" by Al-Tairi et al.
(2020)
- This survey paper provides a comprehensive overview of fraud detection techniques
specifically tailored to social media networks. It covers a wide range of methodologies, including
traditional rule-based approaches, statistical techniques, machine learning algorithms, and deep
learning models. The authors discuss the advantages and limitations of each approach,
highlighting the role of neural networks in addressing the inherent challenges of detecting
fraudulent activities in dynamic and heterogeneous social media environments.
6. B. D. Freeman et al. (2015)
This paper focused on detecting the clusters of fake accounts rather than an individual. This
approach created a cluster, based on the features provided at the registration time such as
registration IP address and registration date. Random forest, SVM and Logistic regression are
used to train the model, and SVM is used to classify the cluster of accounts as fake or not.
A separate paper, published in Arlington, USA, is dedicated to reverse engineering mobile
applications. It uses a technique to automatically Reverse Engineer Mobile Application User
Interfaces (REMAUI). On a given input, REMAUI identifies user interface elements such as
images, text, containers, and lists via computer vision and optical character recognition (OCR)
techniques. In an evaluation on 448 screenshots of Android and iOS applications, the
REMAUI-generated user interfaces were similar to the originals, both pixel by pixel and in
terms of their runtime behavior.
7. Krishna B. Kansara et al. (2016)
This paper proposed a Sybil node discovery method based on the social graph. This approach
overcomes the limitations of the previous graph-based approaches by adding user behavioral
manners such as latent dealings and friendship refusal. The proposed design is divided into two
parts, Sybil node identification (SNI) and Sybil node identification using behavioral analysis
(SNI-B)
8. Ali M. Meligy (2017)
This paper presents a technique to detect fake accounts on social networking sites called fake
profile recognizer. This technique is based on two methods i.e regular expression and
deterministic finite automata. A regular expression is used to authenticate the profiles and
deterministic automata recognize the identities in a trusted manner.
9. Samala Durga Prasad Reddy (2019)
Used a random forest classifier to detect fake accounts with 95% accuracy. Profile features such as id,
name, status count, followers count, friends count, location, date of creation of the id, number of shares
made by the account, gender, and language used by the account holder were used as features
for the classification process.
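Profile attributes like those listed above are typically encoded into a numeric feature vector before classification. The sketch below is hypothetical: the field names and values are invented, and the one-hot language encoding is just one possible choice.

```python
# Hypothetical profile record; fields follow the feature list above, values invented
profile = {
    "statuses_count": 120,
    "followers_count": 15,
    "friends_count": 2000,
    "account_age_days": 30,
    "shares_count": 3,
    "lang": "en",
}

def to_features(p, known_langs=("en", "es", "fr")):
    """Numeric features plus a one-hot encoding of the language field."""
    base = [p["statuses_count"], p["followers_count"], p["friends_count"],
            p["account_age_days"], p["shares_count"]]
    one_hot = [1 if p["lang"] == lang else 0 for lang in known_langs]
    return base + one_hot

print(to_features(profile))
```

Categorical fields such as gender or location would be encoded the same way; name and id would normally be dropped or converted to derived features (e.g. digit count in the username) rather than used directly.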
10. Rohit Raturi (2018)
Proposed two architectures for solving this issue. The first one uses NLP and marks two or more
accounts as suspicious if they use the same IP or MAC address. In the second architecture, a
Support Vector Machine (SVM) is used for finding accounts which make frequent use of
harmful words. These suspicious accounts then have to verify themselves.
1.3 MODULES:
1. Upload Social Network Profiles Dataset:
Using this module we upload the dataset to the application.
2. Preprocess Dataset:
Using this module we apply preprocessing techniques such as removing missing values, and then split the
dataset into train and test parts; the application uses 80% of the dataset to train the ANN and 20% to test the ANN's
prediction accuracy.
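A minimal sketch of this preprocessing step, assuming a pandas DataFrame with invented column names (the report's actual dataset columns are not shown here):

```python
# Sketch of the preprocess step: drop rows with missing values,
# then split 80/20 into train and test. Column names are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "followers_count": [10, None, 300, 250, 5, 400],
    "friends_count":   [900, 20, 180, 210, 950, 150],
    "label":           [1, 0, 0, 0, 1, 0],   # 1 = fake, 0 = genuine
})
df = df.dropna()  # remove rows with missing values

X = df.drop(columns=["label"]).values
y = df["label"].values
# test_size=0.2 expresses the same 80/20 split described above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(len(X_train), len(X_test))
```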
3. Run ANN Algorithm:
Using this module we train the ANN algorithm with the train and test data; a trained model is then
generated, and we can use this trained model to predict fake accounts in new datasets.
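A hedged sketch of the training step. The report's actual ANN implementation is not reproduced here; this stand-in uses scikit-learn's MLPClassifier on synthetic two-feature data.

```python
# Stand-in for the ANN training step: a small feed-forward network
# trained on synthetic, well-separated profile features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic [followers_count, friends_count] features (invented values)
genuine = rng.normal([300, 150], 40, size=(200, 2))
fake = rng.normal([15, 900], 40, size=(200, 2))
X = np.vstack([genuine, fake])
y = np.array([0] * 200 + [1] * 200)  # 0 = genuine, 1 = fake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# Scale the inputs, then fit a two-hidden-layer network
ann = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=200, random_state=1),
)
ann.fit(X_train, y_train)
print("test accuracy:", ann.score(X_test, y_test))
```

The saved pipeline plays the role of the "trained model" the module description mentions: it can be pickled and later applied to new datasets.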
4. ANN Accuracy & Loss Graph:
To train the ANN model we use 200 epochs/iterations, and the graph plots the accuracy/loss
performance of the ANN at each epoch/iteration.
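A sketch of the accuracy/loss graph step. The per-epoch values here are invented placeholders; in the project they would come from the training history of the ANN.

```python
# Plot placeholder per-epoch accuracy and loss curves over 200 epochs
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

epochs = list(range(1, 201))                 # the report trains for 200 epochs
loss = [1.0 / e for e in epochs]             # placeholder decreasing loss
accuracy = [1.0 - 0.5 / e for e in epochs]   # placeholder increasing accuracy

plt.plot(epochs, accuracy, label="accuracy")
plt.plot(epochs, loss, label="loss")
plt.xlabel("epoch")
plt.ylabel("value")
plt.legend()
plt.savefig("ann_accuracy_loss.png")
```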
5. Predict Fake/Genuine Profile using ANN:
Using this module we upload new test data and then apply the trained ANN model to predict whether each
profile in the test data is genuine or fake.
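A sketch of the prediction step, again using a small scikit-learn classifier as a stand-in for the report's trained ANN. The feature names and data are assumptions.

```python
# Stand-in prediction step: train on toy labeled profiles, then
# classify "new test data" as genuine or fake.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy training data: [followers_count, friends_count]; 1 = fake, 0 = genuine
X = np.array([[300, 150], [280, 170], [10, 900], [15, 950]], dtype=float)
y = np.array([0, 0, 1, 1])
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                  max_iter=2000, random_state=0),
)
model.fit(X, y)

# New profiles close to the genuine and fake clusters above
new_profiles = np.array([[290, 160], [12, 920]], dtype=float)
labels = ["fake" if p == 1 else "genuine" for p in model.predict(new_profiles)]
print(labels)
```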
2. SYSTEM ANALYSIS
2.1 Existing System & its Disadvantages:
The concern about fake profiles is protecting personal data from cyber attacks such as phishing.
Cyber attackers often use fake profiles to steal information: harvesting passwords, sharing irrelevant
content, and exploiting critical situations while hiding behind anonymity over a long period. To
reduce incidents such as trolling, hacking, and cyberbullying, these profiles need to be identified.
DISADVANTAGES
Data bias: The accuracy of the neural network model depends on the quality and quantity of the data used
for training. If the training data is biased, the model may not perform accurately.
False positives: The neural network model may sometimes identify legitimate profiles as
fake, leading to false positives.
Resource-intensive: Training a neural network model requires significant computing power and
resources, which can be costly.
Complexity: Building and training a neural network model requires specialized knowledge and
expertise, making it difficult for non-experts to replicate the project.
Privacy concerns: The use of neural networks to identify fake profiles may raise privacy concerns,
as personal data is used to train the model. It is essential to ensure that user data is handled securely
and with consent.
2.2 PROPOSED SYSTEM & ITS ADVANTAGES:
To address this, an "artificial neural network" (ANN) has been introduced as part of the computer system.
It is designed to simulate the way in which the human brain processes and analyses information. An
inductive research approach can be adopted here: by observing the existing process and situations,
patterns and regularities in the system can be identified. To take full technical advantage, the ANN
model needs to be used effectively. ANNs can be described as a foundation of artificial intelligence
that solves problems that are difficult by human standards; they model the human nervous system
through a learning technique.
ADVANTAGES
• Increased accuracy: Neural networks can identify patterns in large amounts of data, making them
effective at identifying fake profiles across multiple online social networks with a high level of accuracy.
• Scalability: The neural network model can be trained on a large dataset, making it possible to scale up
the project as the number of social networks grows.
• Real-time detection: The neural network can process data in real-time, making it possible to identify
fake profiles as they are created and take action immediately.
• Automation: Once the model is trained, the process of identifying fake profiles can be automated,
saving time and resources.
• Customizability: The neural network model can be customized to fit specific requirements, such as
identifying fake profiles that are targeting a particular demographic or geographic location.
2.3 SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
Processor: i3 processor
Hard Disk: 500 GB
RAM: 4 GB
SOFTWARE REQUIREMENTS:
Operating System: Windows 10/11
Programming Language: Python 3.10
Domain: ANN
Integrated Development Environment (IDE): Visual Studio Code
Front End Technologies: HTML5, CSS3, JavaScript
Back End Technologies: Django
Database: MySQL
Database Software: WAMP or XAMPP Server
Web Server or Deployment Server: Django Application Development Server
3. SYSTEM STUDY
3.1 FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase and business proposal is put forth with a very general plan
for the project and some cost estimates. During system analysis the feasibility study of the proposed system is
to be carried out. This is to ensure that proposed system is not a burden to the company. For feasibility analysis,
some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The
amount of funds that the company can pour into the research and development of the system is limited,
and the expenditures must be justified. The developed system is well within budget, and this was achieved
because most of the technologies used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system.
Any system developed must not place a high demand on the available technical resources, as this would
lead to high demands being placed on the client. The developed system must have modest requirements,
as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process
of training the user to use the system efficiently. The user must not feel threatened by the system, but must
instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are
employed to educate the user about the system and to make them familiar with it.
4. SYSTEM DESIGN
4.1 SYSTEM ARCHITECTURE:
Fig.1: System architecture. The detection flow is: select the profile to be tested → extract the required attributes → pass them through the trained classifier → determine real/fake → get feedback on the result → train the classifier using the feedback.
4.2 UML DIAGRAMS:
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling
language in the field of object-oriented software engineering. The standard is managed, and
was created, by the Object Management Group. The goal is for UML to become a common
language for creating models of object-oriented computer software. In its current form, UML
comprises two major components: a meta-model and a notation. In the future, some form of
method or process may also be added to, or associated with, UML. The Unified Modeling
Language is a standard language for specifying, visualizing, constructing, and documenting
the artifacts of software systems, as well as for business modeling and other non-software
systems. The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems. The UML is a very important part of
developing object-oriented software and the software development process. The UML uses
mostly graphical notations to express the design of software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can
develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks,
patterns and components.
7. Integrate best practices.
4.2.1 USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and
created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided
by a system in terms of actors, their goals (represented as use cases), and any dependencies between those
use cases.
4.2.2 CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of
static structure diagram that describes the structure of a system by showing the system's classes,
their attributes, operations (or methods), and the relationships among the classes. It shows
which class contains which information.
4.2.3 SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event
scenarios, and timing diagrams.
The sequence diagram for this system shows the interactions between the User and the Application in order:
upload the social network profiles dataset, preprocess the dataset, run the ANN algorithm, view the ANN
accuracy and loss graph, predict fake/genuine profiles using the ANN, and log out.
4.2.4 COLLABORATION DIAGRAM:
A collaboration diagram, also referred to as a communication diagram, is a graphical
representation that illustrates the interactions and relationships between various objects or roles
within a system. It provides a visual depiction of how objects collaborate to accomplish specific
tasks or achieve particular functionalities within the system.
In a collaboration diagram:
1. Objects/Participants: Each object or participant involved in the system is depicted as a
rectangle or another suitable shape. These objects represent instances of classes or roles played
by entities within the system.
2. Messages: Communication between objects is represented by arrows, indicating the flow of
messages between them. Messages can be synchronous, denoted by solid lines, indicating direct
interactions where both sender and receiver are active simultaneously. Alternatively, messages
can be asynchronous, where the sender continues without waiting for a response.
The collaboration diagram shows the same User–Application interactions with their sequence numbers:
1. Upload Social Network Profiles Dataset; 2. Preprocess Dataset; 3. Run ANN Algorithm;
4. ANN Accuracy & Loss Graph; 5. Predict Fake/Genuine Profile using ANN; 6. Logout.
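The six steps above can be sketched in Python. The model below is a deliberately tiny stand-in for the report's ANN: a single logistic neuron trained by gradient descent on a toy, hand-made dataset (the features, data values, and labels are invented for illustration, not the project's real profile data):

```python
import math
import random

def train_ann(rows, labels, lr=0.5, epochs=500, seed=1):
    """Train a one-neuron 'network' (a logistic unit) with SGD.
    rows: preprocessed feature vectors; labels: 1 = genuine, 0 = fake."""
    random.seed(seed)
    w = [random.uniform(-0.5, 0.5) for _ in rows[0]]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "genuine" if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else "fake"

# Steps 1-3: "upload" a toy dataset (follower ratio, has profile picture),
# already preprocessed to the 0-1 range, then run the training.
rows = [[0.9, 1.0], [0.8, 1.0], [0.1, 0.0], [0.05, 0.0]]
labels = [1, 1, 0, 0]
w, b = train_ann(rows, labels)
```

The real project would use a multi-layer ANN and plot accuracy and loss per epoch (step 4); the toy version keeps only the train/predict workflow of steps 2, 3, and 5.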
5.TECHNOLOGIES
5.1 WHAT IS PYTHON:
Below are some facts about Python.
Python is currently the most widely used multi-purpose, high-level programming language.
Python allows programming in object-oriented and procedural paradigms. Python
programs are generally smaller than those in other programming languages like Java.
Programmers have to type relatively less, and the indentation requirement of the language
keeps the code readable.
Python is used by almost all tech-giant companies like Google, Amazon,
Facebook, Instagram, Dropbox, Uber, etc.
The biggest strength of Python is its huge collection of standard libraries, which can be used
for the following –
• Machine Learning
• GUI Applications (like Kivy, Tkinter, PyQt etc. )
• Web frameworks like Django (used by YouTube, Instagram, Dropbox)
• Image processing (like Opencv, Pillow)
• Web scraping (like Scrapy, BeautifulSoup, Selenium)
• Test frameworks
• Multimedia
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON:
Advantages of Python :-
Let’s see how Python dominates over other languages.
1. Extensive Libraries
Python ships with an extensive library containing code for various purposes like
regular expressions, documentation generation, unit testing, web browsers, threading,
databases, CGI, email, image manipulation, and more. So, we don't have to write the complete
code for those manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some of
your code in languages like C++ or C. This comes in handy, especially in projects.
3. Embeddable
Complimentary to extensibility, Python is embeddable as well. You can put your Python code
in your source code of a different language, like C++. This lets us add scripting capabilities
to our code in the other language.
4. Improved Productivity
The language's simplicity and extensive libraries render programmers more
productive than languages like Java and C++ do. You also need to write less to
get more things done.
5. IOT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for
the Internet Of Things. This is a way to connect the language with the real world.
6. Simple and Easy
When working with Java, you may have to create a class to print ‘Hello World’. But in
Python, just a print statement will do. It is also quite easy to learn, understand, and code.
This is why when people pick up Python, they have a hard time adjusting to other more
verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English. This
is the reason why it is so easy to learn, understand, and code. It also does not need curly braces
to define blocks, and indentation is mandatory. This further aids the readability of the code.
8. Object-Oriented
This language supports both the procedural and object-oriented programming
paradigms. While functions help us with code reusability, classes and objects let us model
the real world. A class allows the encapsulation of data and functions into one.
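A class bundles data and the functions that operate on it. For example (a hypothetical `Profile` class, invented for illustration):

```python
class Profile:
    """Encapsulates a profile's data and the behaviour that acts on it."""
    def __init__(self, username, followers, follows):
        self.username = username
        self.followers = followers
        self.follows = follows

    def follow_ratio(self):
        # Behaviour lives next to the data it operates on.
        return self.followers / self.follows if self.follows else 0.0
```

Calling `Profile("alice", 120, 60).follow_ratio()` returns 2.0: the data and the computation travel together in one object.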
9. Free and Open-Source
Like we said earlier, Python is freely available. But not only can you download Python
for free, but you can also download its source code, make changes to it, and even distribute
it. It downloads with an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes
to it if you want to run it on another platform. But it isn’t the same with Python. Here, you
need to code only once, and you can run it anywhere. This is called Write Once Run
Anywhere (WORA). However, you need to be careful enough not to include any
system-dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by
one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages
1. Less Coding
Almost all tasks done in Python require less coding than the same task done in
other languages. Python also has awesome standard library support, so you don't have to
search for any third-party libraries to get your job done. This is the reason many people
suggest learning Python to beginners.
2. Affordable
Python is free therefore individuals, small companies or big organizations can leverage the
free available resources to build applications. Python is popular and widely used so it gives
you better community support.
The 2019 GitHub annual survey showed us that Python has overtaken Java in the most
popular programming language category.
3. Python is for Everyone
Python code can run on any machine whether it is Linux, Mac or Windows. Programmers
need to learn different languages for different jobs but with Python, you can professionally
build web apps, perform data analysis and machine learning, automate things, do web
scraping and also build games and powerful visualizations. It is an all-rounder programming
language.
Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you
should be aware of its consequences as well. Let’s now see the downsides of choosing Python
over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, it
often results in slow execution. This, however, isn't a problem unless speed is a focal point
for the project. In other words, unless high speed is a requirement, the benefits offered by
Python are enough to outweigh its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the
client side. Besides that, it is rarely ever used to implement smartphone-based
applications. One such application is called Carbonnelle.
The reason it is not so famous, despite the existence of Brython, is that it isn't that secure.
3. Design Restrictions
As you know, Python is dynamically typed. This means that you don't need to declare the
type of a variable while writing the code. It uses duck typing. But wait, what's that? Well,
it just means that if it looks like a duck, it must be a duck. While this is easy on
programmers during coding, it can raise run-time errors.
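Duck typing in action, including the run-time error it can raise (the `Duck` and `Robot` classes are invented for illustration):

```python
def make_it_quack(obj):
    # Duck typing: anything with a .quack() method is accepted;
    # no type declarations are checked up front.
    return obj.quack()

class Duck:
    def quack(self):
        return "quack"

class Robot:
    pass  # no quack() method

sound = make_it_quack(Duck())      # works: it "looks like a duck"
try:
    make_it_quack(Robot())         # the mistake surfaces only at run time
    failed = False
except AttributeError:
    failed = True
```

A compiler for a statically typed language would have rejected the `Robot` call before the program ever ran; in Python the error appears only when that line executes.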
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python’s database access layers
are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I
don’t do Java, I’m more of a Python person. To me, its syntax is so simple that the verbosity
of Java code seems unnecessary.
This was all about the Advantages and Disadvantages of Python Programming Language.
5.1.2 HISTORY OF PYTHON:
What do the alphabet and the programming language Python have in common? Right, both
start with ABC. If we are talking about ABC in the Python context, it's clear that the
programming language ABC is meant. ABC is a general-purpose programming language
and programming environment, which had been developed in the Netherlands, Amsterdam,
at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to
influence the design of Python. Python was conceptualized in the late 1980s. Guido van
Rossum worked at that time on a project at the CWI, called Amoeba, a distributed operating
system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I
worked as an implementer on a team building a language called ABC at Centrum Wiskunde
en Informatica (CWI). I don't know how well people know ABC's influence on Python. I
try to mention ABC's influence because I'm indebted to everything I learned during that
project and to the people who worked on it." Later on in the same interview, Guido van
Rossum continued: "I remembered all my experience and some of my frustration with
ABC. I decided to try to design a simple scripting language that possessed some of ABC's
better properties, but without its problems. So I started typing. I created a simple virtual
machine, a simple parser, and a simple runtime. I made my own version of the various ABC
parts that I liked. I created a basic syntax, used indentation for statement grouping instead of
curly braces or begin-end blocks, and developed a small number of powerful data types: a
hash table (or dictionary, as we call it), a list, strings, and numbers."
5.2 WHAT IS MACHINE LEARNING:
Before we take a look at the details of various machine learning methods, let's start by
looking at what machine learning is, and what it isn't. Machine learning is often categorized
as a subfield of artificial intelligence, but I find that categorization can often be misleading
at first brush. The study of machine learning certainly arose from research in this context,
but in the data science application of machine learning methods, it's more helpful to think
of machine learning as a means of building models of data.
Fundamentally, machine learning involves building mathematical models to help
understand data. "Learning" enters the fray when we give these models tunable parameters
that can be adapted to observed data; in this way the program can be considered to be
"learning" from the data. Once these models have been fit to previously seen data, they can
be used to predict and understand aspects of newly observed data. I'll leave to the reader
the more philosophical digression regarding the extent to which this type of mathematical,
model-based "learning" is similar to the "learning" exhibited by the human brain.
Understanding the problem setting in machine learning is essential to using these tools
effectively, and so we will start with some broad categorizations of the types of approaches
we'll discuss here.
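A minimal illustration of "tunable parameters adapted to observed data": an ordinary least-squares line fit in plain Python. The two parameters a and b of the model y = a·x + b are chosen to fit the data (a toy model, not one used by the project):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b: the tunable parameters a and b
    are adapted to the observed (x, y) pairs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b
```

Once fit to previously seen data, the returned `(a, b)` can predict y for new x values, which is exactly the "learning then predicting" pattern described above.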
5.2.1 Categories Of Machine Leaning:
At the most fundamental level, machine learning can be categorized into two main types:
supervised learning and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured
features of data and some label associated with the data; once this model is determined, it
can be used to apply labels to new, unknown data. This is further subdivided into
classification tasks and regression tasks: in classification, the labels are discrete categories,
while in regression, the labels are continuous quantities. We will see examples of both
types of supervised learning in the following section.
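The distinction can be shown with two hypothetical toy models (the threshold and coefficients below are made up for illustration):

```python
def classify_profile(followers):
    """Classification: the predicted label is a discrete category."""
    return "genuine" if followers >= 50 else "fake"

def predict_posts(account_age_days):
    """Regression: the predicted label is a continuous quantity
    (a made-up linear rule for expected post count)."""
    return 0.3 * account_age_days + 2.0
```

Both map measured features to a label; only the type of label differs, which is what separates the two supervised tasks.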
Unsupervised learning involves modeling the features of a dataset without reference to any
label, and is often described as "letting the dataset speak for itself." These models include
tasks such as clustering and dimensionality reduction. Clustering algorithms identify
distinct groups of data, while dimensionality reduction algorithms search for more succinct
representations of the data. We will see examples of both types of unsupervised learning
in the following section.
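A clustering algorithm in miniature: two-means on one-dimensional points, in plain Python. No labels are given; the two groups emerge from the data alone (a sketch assuming two well-separated clusters, not a production k-means):

```python
def two_means_1d(points, iters=10):
    """Split unlabelled 1-D points into two clusters (toy k-means, k=2).
    Assumes the data actually contains two distinct groups."""
    c1, c2 = min(points), max(points)          # initial centroids
    g1, g2 = [], []
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)                 # recompute centroids
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)
```

Given the unlabelled points [1, 2, 1.5, 10, 11, 10.5], the algorithm discovers the two groups around 1.5 and 10.5 without ever being told they exist.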
5.2.2 Need for Machine Learning:
Human beings, at this moment, are the most intelligent and advanced species on earth
because they can think, evaluate and solve complex problems. On the other side, AI is still
in its initial stage and hasn't surpassed human intelligence in many aspects. The question,
then, is: what is the need to make machines learn? The most suitable reason for doing
this is "to make decisions, based on data, with efficiency and scale".
Lately, organizations have been investing heavily in newer technologies like Artificial
Intelligence, Machine Learning and Deep Learning to extract key information from data and
perform several real-world tasks and solve problems. We can call these data-driven decisions
taken by machines, particularly to automate the process. These data-driven decisions can
be used, instead of programming logic, in problems that cannot be programmed
inherently. The fact is that we can't do without human intelligence, but the other aspect is that
we all need to solve real-world problems with efficiency at a huge scale. That is why the
need for machine learning arises.
5.2.3 Challenges in Machines Learning:
While Machine Learning is rapidly evolving, making significant strides in cybersecurity
and autonomous cars, this segment of AI as a whole still has a long way to go. The reason
is that ML has not been able to overcome a number of challenges. The challenges
that ML is currently facing are −
Quality of data − Having good-quality data for ML algorithms is one of the biggest
challenges. Use of low-quality data leads to problems in data preprocessing and
feature extraction.
Time-consuming task − Another challenge faced by ML models is the consumption of time,
especially for data acquisition, feature extraction and retrieval.
Lack of specialist persons − As ML technology is still in its infancy, finding
expert resources is difficult.
No clear objective for formulating business problems − Having no clear objective and
well-defined goal for business problems is another key challenge for ML, because this
technology is not that mature yet.
Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot
represent the problem well.
Curse of dimensionality − Another challenge ML model faces is too many features of data
points. This can be a real hindrance.
Difficulty in deployment − Complexity of the ML model makes it quite difficult to be
deployed in real life.
5.2.4 Applications of Machines Learning :
Machine Learning is the most rapidly growing technology, and according to researchers we
are in the golden year of AI and ML. It is used to solve many real-world complex problems
which cannot be solved with the traditional approach. Following are some real-world
applications of ML −
5.2.4.1 Emotion analysis
5.2.4.2 Sentiment analysis
5.2.4.3 Error detection and prevention
5.2.4.4 Weather forecasting and prediction
5.2.4.5 Stock market analysis and forecasting
5.2.4.6 Speech synthesis
5.2.4.7 Speech recognition
5.2.4.8 Customer segmentation
5.2.4.9 Object recognition
5.2.4.10 Fraud detection
5.2.4.11 Fraud prevention
5.2.4.12 Recommendation of products to customer in online shopping
5.2.5 How to Start Learning Machine Learning?
Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of
study that gives computers the capability to learn without being explicitly
programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one
of the most popular (if not the most!) career choices. According to Indeed, Machine Learning
Engineer Is The Best Job of 2019 with a 344% growth and an average base salary of $146,085
per year.
But there is still a lot of doubt about what exactly Machine Learning is and how to start
learning it. So this article deals with the basics of Machine Learning and also the path you
can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get
started!!!
This is a rough roadmap you can follow on your way to becoming an insanely talented
Machine Learning Engineer. Of course, you can always modify the steps according to your
needs to reach your desired end-goal!
Step 1 – Understand the Prerequisites
In case you are a genius, you could start ML directly, but normally there are some
prerequisites that you need to know, which include Linear Algebra, Multivariate Calculus,
Statistics, and Python. And if you don't know these, never fear! You don't need a Ph.D. degree
in these topics to get started, but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine Learning.
However, the extent to which you need them depends on your role as a data scientist. If you
are more focused on application-heavy machine learning, then you will not be that heavily
focused on maths, as there are many common libraries available. But if you want to focus on
R&D in Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very
important, as you will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML
expert will be spent collecting and cleaning data. And statistics is a field that handles the
collection, analysis, and presentation of data. So it is no surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical Significance,
Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is
also a very important part of ML, which deals with various concepts like Conditional
Probability, Priors and Posteriors, Maximum Likelihood, etc.
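Bayesian thinking in one function: updating a prior into a posterior with Bayes' rule. The numbers in the example are invented purely for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

# Hypothetical example: prior P(fake) = 0.1; a missing profile picture is
# seen in 80% of fake profiles but only 20% of genuine ones.
p = posterior(0.1, 0.8, 0.2)   # updated belief that the profile is fake
```

Observing the evidence raises the belief from the 0.1 prior to roughly 0.31: the posterior combines the prior with the likelihood of the evidence.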
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn
them as they go along with trial and error. But the one thing that you absolutely cannot skip
is Python! While there are other languages you can use for Machine Learning like R, Scala,
etc. Python is currently the most popular language for ML. In fact, there are many Python
libraries that are specifically useful for Artificial Intelligence and Machine Learning such
as Keras, TensorFlow, Scikit-learn, etc. So if you want to learn ML, it’s best if you learn
Python! You can do that using various online resources and courses such as Fork Python
available Free on GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually learning ML
(which is the fun part!!!). It's best to start with the basics and then move on to more
complicated stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning
• Model – A model is a specific representation learned from data by applying some
machine learning algorithm. A model is also called a hypothesis.
• Feature – A feature is an individual measurable property of the data. A set of numeric
features can be conveniently described by a feature vector. Feature vectors are fed as
input to the model. For example, in order to predict a fruit, there may be features like
color, smell, taste, etc.
• Target (Label) – A target variable or label is the value to be predicted by our model.
For the fruit example discussed in the feature section, the label with each set of input
would be the name of the fruit like apple, orange, banana, etc.
• Training – The idea is to give a set of inputs (features) and its expected outputs (labels),
so after training, we will have a model (hypothesis) that will then map new data to one
of the categories trained on.
• Prediction – Once our model is ready, it can be fed a set of inputs to which it will
provide a predicted output(label).
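The terminology above can be made concrete with the fruit example, using a trivial nearest-neighbour hypothesis (the feature values are invented):

```python
# Feature vectors (colour score, sweetness) and their target labels.
features = [[0.9, 0.8], [0.2, 0.3], [0.85, 0.75]]
labels = ["apple", "lime", "apple"]

def train(features, labels):
    """'Training': here the model (hypothesis) simply memorises examples."""
    return list(zip(features, labels))

def predict(model, x):
    """'Prediction': map a new feature vector to one of the learned labels."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda fl: dist(fl[0], x))[1]

model = train(features, labels)
```

A new fruit described by the feature vector [0.88, 0.7] is predicted "apple" because it lies closest to the apple examples the model was trained on.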
(b) Types of Machine Learning
• Supervised Learning – This involves learning from a training dataset with labeled
data using classification and regression models. This learning process continues until
the required level of performance is achieved.
• Unsupervised Learning – This involves using unlabelled data and then finding the
underlying structure in the data in order to learn more and more about the data itself
using factor and cluster analysis models.
• Semi-supervised Learning – This involves using unlabelled data like Unsupervised
Learning with a small amount of labeled data. Using labeled data vastly increases the
learning accuracy and is also more cost-effective than Supervised Learning.
• Reinforcement Learning – This involves learning optimal actions through trial and
error. So the next action is decided by learning behaviors that are based on the current
state and that will maximize the reward in the future.
5.2.6 ADVANTAGES & DISADVANTAGES OF ML
Advantages of Machine learning :-
1. Easily identifies trends and patterns -
Machine Learning can review large volumes of data and discover specific trends and patterns
that would not be apparent to humans. For instance, for an e-commerce website like Amazon,
it serves to understand the browsing behaviors and purchase histories of its users to help cater
to the right products, deals, and reminders relevant to them. It uses the results to reveal relevant
advertisements to them.
2. No human intervention needed (automation)
With ML, you don't need to babysit your project every step of the way. Since it means giving
machines the ability to learn, it lets them make predictions and also improve the algorithms
on their own. A common example of this is anti-virus software: it learns to filter new threats
as they are recognized. ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This
lets them make better decisions. Say you need to make a weather forecast model. As the
amount of data you have keeps growing, your algorithms learn to make more accurate
predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multi-dimensional and
multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does
apply, it holds the capability to help deliver a much more personal experience to customers
while also targeting the right customers.
Disadvantages of Machine Learning :-
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they must wait for
new data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose
with a considerable amount of accuracy and relevancy. It also needs massive resources to
function. This can mean additional requirements of computer power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an
algorithm with data sets small enough to not be inclusive. You end up with biased predictions
coming from a biased training set. This leads to irrelevant advertisements being displayed to
customers. In the case of ML, such blunders can set off a chain of errors that can go undetected
for long periods of time. And when they do get noticed, it takes quite some time to recognize
the source of the issue, and even longer to correct it.
5.3 PYTHON DEVELOPMENT STEPS:
Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources
in February 1991. This release already included exception handling, functions, and the core
data types list, dict, str and others. It was also object-oriented and had a module system.
Python version 1.0 was released in January 1994. The major new features included in this
release were the functional programming tools lambda, map, filter and reduce, which Guido
van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was
introduced. This release included list comprehensions, a full garbage collector and support
for Unicode. Python flourished for another 8 years in the versions 2.x before the next
major release, Python 3.0 (also known as "Python 3000" and "Py3K"), came out. Python 3 is
not backwards compatible with Python 2.x. The emphasis in Python 3 has been on the removal
of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling
the 13th law of the Zen of Python: "There should be one -- and preferably only one -- obvious
way to do it." Some changes in Python 3.0:
5.3.2 Print is now a function
5.3.3 Views and iterators instead of lists
5.3.4 The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot
be sorted, because all the elements of a list must be comparable to each other.
5.3.5 There is only one integer type left, i.e. int; the old long type is now int as well.
5.3.6 The division of two integers returns a float instead of an integer. "//" can be used to get
the "old" behaviour.
5.3.7 Text Vs. Data Instead Of Unicode Vs. 8-bit
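The changes listed above can be demonstrated in a short script; every line below runs only on Python 3.

```python
# Illustrating the Python 3 changes listed above.

# 5.3.2: print is now a function
print("hello")

# 5.3.6: true division returns a float; // keeps the old floor behaviour
assert 7 / 2 == 3.5
assert 7 // 2 == 3

# 5.3.5: there is only one integer type, with arbitrary precision
assert type(2 ** 100) is int

# 5.3.3: dict.keys() returns a view, not a list
view = {"a": 1}.keys()
assert not isinstance(view, list)

# 5.3.4: a heterogeneous list can no longer be sorted
try:
    sorted([1, "two"])
except TypeError:
    print("heterogeneous list is not sortable")
```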
Python
Python is an interpreted high-level programming language for general-purpose
programming. Created by Guido van Rossum and first released in 1991, Python has a
design philosophy that emphasizes code readability, notably using significant whitespace.
Python features a dynamic type system and automatic memory management. It supports
multiple programming paradigms, including object-oriented, imperative, functional and
procedural, and has a large and comprehensive standard library.
5.3.8 Python is Interpreted − Python is processed at runtime by the interpreter. You do not need
to compile your program before executing it. This is similar to PERL and PHP.
5.3.9 Python is Interactive − you can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
5.3.10 Python also acknowledges that speed of development is important. Readable and terse
code is part of this, and so is access to powerful constructs that avoid tedious repetition of
code. Maintainability also ties into this: it may be an all but useless metric, but it does say
something about how much code you have to scan, read and/or understand to troubleshoot
problems or tweak behaviours. This speed of development, the ease with which a
programmer of other languages can pick up basic Python skills, and the huge standard
library are key to another area where Python excels. All its tools have been quick to
implement, have saved a lot of time, and several of them have later been patched and
updated by people with no Python background, without breaking.
5.4 MODULES USED IN PYTHON:
Tensorflow
TensorFlow is a free and open-source software library for dataflow and differentiable
programming across a range of tasks. It is a symbolic math library, and is also used for
machine learning applications such as neural networks. It is used for both research and
production at Google.
TensorFlow was developed by the Google Brain team for internal Google use. It was released
under the Apache 2.0 open-source license on November 9, 2015.
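The dataflow and differentiable-programming character of TensorFlow can be seen in a minimal gradient computation; this assumes TensorFlow 2.x is installed and is not taken from the project code.

```python
# TensorFlow as a differentiable-programming library:
# y = x^2, so dy/dx at x = 3 should be 6.
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x

grad = tape.gradient(y, x)   # automatic differentiation through the dataflow graph
print(float(grad))           # 6.0
```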
Numpy
Numpy is a general-purpose array-processing package. It provides a high-performance
multidimensional array object, and tools for working with these arrays.
It is the fundamental package for scientific computing with Python. It contains various
features including these important ones:
• A powerful N-dimensional array object
• Sophisticated (broadcasting) functions
• Tools for integrating C/C++ and Fortran code
• Useful linear algebra, Fourier transform, and random number capabilities
• Besides its obvious scientific uses, Numpy can also be used as an efficient
multidimensional container of generic data. Arbitrary data types can be defined using
Numpy, which allows Numpy to seamlessly and speedily integrate with a wide variety of
databases.
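The first two features above, the N-dimensional array and broadcasting, look like this in practice (toy numbers for illustration):

```python
# NumPy's N-dimensional array and broadcasting.
import numpy as np

a = np.arange(6).reshape(2, 3)    # 2x3 array: [[0, 1, 2], [3, 4, 5]]
col = np.array([[10], [100]])     # 2x1 column vector

# Broadcasting stretches the 2x1 column across the 2x3 array,
# so each row of `a` is scaled by a different factor.
b = a * col
print(b)                          # [[0, 10, 20], [300, 400, 500]]
print(b.sum(), b.mean())          # 1230 205.0
```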
Pandas
Pandas is an open-source Python library providing high-performance data manipulation
and analysis tools built on its powerful data structures. Before Pandas, Python was mainly
used for data munging and preparation and had very little to offer for data analysis; Pandas
solved this problem. Using Pandas, we can accomplish the five typical steps in the processing
and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and
analyze. Python with Pandas is used in a wide range of academic and commercial domains,
including finance, economics, statistics, analytics, etc.
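The five steps named above can be walked through on a tiny in-memory table; the column names here are illustrative, not the project's real dataset schema.

```python
# Load, prepare, manipulate, and analyze with Pandas.
import pandas as pd

# load (from memory here; pd.read_csv would load a file, as the project does)
df = pd.DataFrame({"followers": [10, 2500, 3, None],
                   "posts": [5, 120, 1, 40]})

# prepare: fill the missing value
df["followers"] = df["followers"].fillna(0)

# manipulate: derive a new feature
df["ratio"] = df["followers"] / (df["posts"] + 1)

# analyze: simple summary statistics
print(df["followers"].mean())                          # 628.25
print(df.sort_values("ratio", ascending=False).head(1))
```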
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a
variety of hardcopy formats and interactive environments across platforms. Matplotlib can
be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web
application servers, and four graphical user interface toolkits. Matplotlib tries to make easy
things easy and hard things possible. You can generate plots, histograms, power spectra,
bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see
the sample plots and thumbnail gallery.
For simple plotting the pyplot module provides a MATLAB-like interface, particularly
when combined with IPython. For the power user, you have full control of line styles, font
properties, axes properties, etc. via an object oriented interface or via a set of functions
familiar to MATLAB users.
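A plot of the kind described takes only a few lines with the pyplot interface; the Agg backend is selected below so the sketch runs without a display, and the output file name is illustrative.

```python
# A minimal pyplot example: a line plot written to a PNG file.
import os
import matplotlib
matplotlib.use("Agg")          # non-interactive backend, no display needed
import matplotlib.pyplot as plt

xs = list(range(10))
plt.figure(figsize=(6, 4))
plt.plot(xs, [x * x for x in xs], "o-")
plt.xlabel("x")
plt.ylabel("x squared")
plt.title("Minimal pyplot example")
plt.savefig("example_plot.png")
print(os.path.exists("example_plot.png"))
```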
Scikit – learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a
consistent interface in Python. It is licensed under a permissive simplified BSD license and
is distributed with many Linux distributions, encouraging academic and commercial use.
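That consistent fit/predict interface can be shown on a toy fake-vs-genuine style problem; the numbers below are synthetic and the two features are made up for illustration, not drawn from the project's dataset.

```python
# scikit-learn's uniform estimator interface: construct, fit, predict.
from sklearn.linear_model import LogisticRegression

# two synthetic features per profile; label 1 = fake, 0 = genuine
X = [[0.1, 5], [0.2, 4], [0.9, 50], [0.8, 60]]
y = [0, 0, 1, 1]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15, 6], [0.85, 55]]))
```

Every scikit-learn estimator, supervised or unsupervised, follows this same construct/fit/predict (or transform) pattern, which is the "consistent interface" the paragraph refers to.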
Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you
should be aware of its consequences as well. Let’s now see the downsides of choosing Python
over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. Because Python is interpreted, this
often results in slow execution. This, however, isn't a problem unless speed is a focal point
of the project. In other words, unless high speed is a requirement, the benefits offered by
Python outweigh its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the
client side. Besides that, it is hardly ever used to implement smartphone-based
applications; one such application is called Carbonnelle.
The reason Python is not popular in the browser despite the existence of Brython is that
Brython isn't that secure.
3. Design Restrictions
As you know, Python is dynamically-typed. This means that you don’t need to declare the
type of variable while writing the code. It uses duck-typing. But wait, what’s that? Well,
it just means that if it looks like a duck, it must be a duck. While this is easy on the
programmers during coding, it can raise run-time errors.
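Duck typing, and the run-time errors it can raise, can be shown concretely; the class and function names below are the classic textbook illustration, not project code.

```python
# Duck typing: any object with the right method works,
# but mistakes only surface at run time.

class Duck:
    def quack(self):
        return "quack"

class Person:
    def quack(self):
        return "I'm quacking"

def make_it_quack(thing):
    return thing.quack()           # no type declared anywhere

print(make_it_quack(Duck()))       # quack
print(make_it_quack(Person()))     # I'm quacking

try:
    make_it_quack(42)              # int has no quack(): error only at run time
except AttributeError as e:
    print("run-time error:", e)
```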
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python’s database access layers
are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I
don’t do Java, I’m more of a Python person. To me, its syntax is so simple that the verbosity
of Java code seems unnecessary.
This was all about the Advantages and Disadvantages of Python Programming Language.
5.5 INSTALL PYTHON STEP-BY-STEP IN WINDOWS AND MAC:
Python, a versatile programming language, doesn't come pre-installed on your computer.
Python was first released in 1991 and remains a very popular high-level programming
language to this day. Its design philosophy emphasizes code readability, with its notable
use of significant whitespace.
The object-oriented approach and language construct provided by Python enables
programmers to write both clear and logical code for projects. This software does not
come pre-packaged with Windows.
How to Install Python on Windows and Mac :
There have been several updates to Python over the years. The question is: how to install
Python? It might be confusing for a beginner willing to start learning Python, but this
tutorial will resolve your query. The latest version of Python at the time of writing is
3.7.4, in other words Python 3.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.
Before you start with the installation process, you first need to know your system
requirements. You must download the Python version matching your system type, i.e.
operating system and processor. My system type is a Windows 64-bit operating system,
so the steps below install Python version 3.7.4 (Python 3) on a Windows 7 device. The
steps to install Python on Windows 10, 8 and 7 are divided into four parts to help you
understand better.
Download the Correct version into the system
Step 1: Go to the official site to download and install python using Google Chrome or
any other web browser. OR Click on the following link: https://www.python.org
Now, check for the latest and the correct version for your operating system.
Step 2: Click on the Download Tab.
Step 3: You can either select the yellow Download Python 3.7.4 button, or scroll further
down and click on the download for your specific version. Here, we are downloading the
most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see the different versions of Python along with the operating system.
• To download 32-bit Python for Windows, you can select any one of three options:
Windows x86 embeddable zip file, Windows x86 executable installer, or Windows x86
web-based installer.
• To download 64-bit Python for Windows, you can select any one of three options:
Windows x86-64 embeddable zip file, Windows x86-64 executable installer, or Windows
x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part,
choosing which version of Python to download, is complete. Now we move on to the
second part of installing Python, i.e. the installation itself.
Note: To know the changes or updates that are made in the version you can click on the
Release Note Option.
Installation of Python
Step 1: Go to Download and Open the downloaded python version to carry out the
installation process.
Step 2: Before you click on Install Now, Make sure to put a tick on Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.
With these above three steps on python installation, you have successfully and correctly
installed Python. Now is the time to verify the installation. Note: The installation process
might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start
Step 2: In the Windows Run Command, type “cmd”.
Step 3: Open the Command prompt option.
Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.
Step 5: You will get the answer as 3.7.4
Note: If you have any earlier version of Python already installed, you must first uninstall
the earlier version and then install the new one.
Check how the Python IDLE works
Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”.
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program
Step 4: To go ahead with working in IDLE you must first save the file. Click on File > Click
on Save
Step 5: Name the file; the save-as type should be Python files. Click on SAVE. Here I have
named the file Hey World.
Step 6: Now, for example, enter a print statement and run the program.
6.IMPLEMENTATIONS
6.1 SOFTWARE ENVIRONMENT
6.1.1 PYTHON
Python is a general-purpose interpreted, interactive, object-oriented, high-level
programming language. An interpreted language, Python has a design philosophy that
emphasizes code readability (notably using whitespace indentation to delimit code blocks rather
than curly brackets or keywords), and a syntax that allows programmers to express concepts in
fewer lines of code than might be used in languages such as C++ or Java. It provides constructs
that enable clear programming on both small and large scales. Python interpreters are available
for many operating systems. CPython, the reference implementation of Python, is open-source
software and has a community-based development model, as do nearly all of its variant
implementations. CPython is managed by the non-profit Python Software Foundation. Python
features a dynamic type system and automatic memory management.
SAMPLE CODE
from tkinter import messagebox
from tkinter import *
from tkinter import simpledialog
import tkinter
import matplotlib.pyplot as plt
import numpy as np
from tkinter import ttk
from tkinter import filedialog
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.callbacks import EarlyStopping
from sklearn.preprocessing import OneHotEncoder
from keras.optimizers import Adam
from keras.utils.np_utils import to_categorical

main = Tk()
main.title("IDENTIFYING FAKE PROFILES ACROSS ONLINE SOCIAL NETWORKS BY USING NEURAL NETWORK")
main.geometry("1300x1200")
main.config(bg="lightgreen")

global filename
global X, Y
global X_train, X_test, y_train, y_test
global accuracy
global dataset
global model

def loadProfileDataset():
    global filename
    global dataset
    outputarea.delete('1.0', END)
    filename = filedialog.askopenfilename(initialdir="Dataset")
    outputarea.insert(END, filename + " loaded\n\n")
    dataset = pd.read_csv(filename)
    outputarea.insert(END, str(dataset.head()))

def preprocessDataset():
    global X, Y
    global dataset
    global X_train, X_test, y_train, y_test
    outputarea.delete('1.0', END)
    X = dataset.values[:, 0:8]
    Y = dataset.values[:, 8]
    indices = np.arange(X.shape[0])
    np.random.shuffle(indices)
    X = X[indices]
    Y = Y[indices]
    Y = to_categorical(Y)
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
    outputarea.insert(END, "\n\nDataset contains total profiles : " + str(len(X)) + "\n")
    outputarea.insert(END, "Total profiles used to train ANN algorithm : " + str(len(X_train)) + "\n")
    outputarea.insert(END, "Total profiles used to test ANN algorithm : " + str(len(X_test)) + "\n")

def executeANN():
    global model
    outputarea.delete('1.0', END)
    global X_train, X_test, y_train, y_test
    global accuracy
    model = Sequential()
    model.add(Dense(200, input_shape=(8,), activation='relu', name='fc1'))
    model.add(Dense(200, activation='relu', name='fc2'))
    model.add(Dense(2, activation='softmax', name='output'))
    optimizer = Adam(lr=0.001)
    model.compile(optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
    print('ANN Neural Network Model Summary: ')
    print(model.summary())
    hist = model.fit(X_train, y_train, verbose=2, batch_size=5, epochs=200)
    results = model.evaluate(X_test, y_test)
    ann_acc = results[1] * 100
    print(ann_acc)
    accuracy = hist.history
    acc = accuracy['accuracy']
    acc = acc[199] * 100
    outputarea.insert(END, "ANN model generated and its prediction accuracy is : " + str(acc) + "\n")

def graph():
    global accuracy
    acc = accuracy['accuracy']
    loss = accuracy['loss']
    plt.figure(figsize=(10, 6))
    plt.grid(True)
    plt.xlabel('Iterations')
    plt.ylabel('Accuracy/Loss')
    plt.plot(acc, 'ro-', color='green')
    plt.plot(loss, 'ro-', color='blue')
    plt.legend(['Accuracy', 'Loss'], loc='upper left')
    plt.title('ANN Iteration Wise Accuracy & Loss Graph')
    plt.show()

def predictProfile():
    outputarea.delete('1.0', END)
    global model
    filename = filedialog.askopenfilename(initialdir="Dataset")
    test = pd.read_csv(filename)
    test = test.values[:, 0:8]
    predict = model.predict_classes(test)
    print(predict)
    for i in range(len(test)):
        msg = ''
        if str(predict[i]) == '0':
            msg = "Given Account Details Predicted As Genuine"
        if str(predict[i]) == '1':
            msg = "Given Account Details Predicted As Fake"
        outputarea.insert(END, str(test[i]) + "  " + msg + "\n\n")

def close():
    main.destroy()

font = ('times', 15, 'bold')
title = Label(main, text='IDENTIFYING FAKE PROFILES ACROSS ONLINE SOCIAL NETWORKS BY USING NEURAL NETWORK')
# ... (the remaining GUI layout code falls on pages missing from this copy)
7.SYSTEM TESTING
INTRODUCTION TO TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, subassemblies, assemblies and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its requirements
and user expectations and does not fail in an unacceptable manner. There are various types of
tests; each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit, before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and expected
results.
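A unit test in this spirit can be sketched with Python's built-in unittest module; validate_label is a hypothetical helper invented for this sketch, not a function from the project code.

```python
# A minimal unit test covering both valid paths and the error path.
import unittest

def validate_label(value):
    """Hypothetical helper: map a raw prediction (0/1) to a profile label."""
    if value == 0:
        return "genuine"
    if value == 1:
        return "fake"
    raise ValueError("label must be 0 or 1")

class TestValidateLabel(unittest.TestCase):
    def test_valid_inputs(self):
        # each unique path produces the documented output
        self.assertEqual(validate_label(0), "genuine")
        self.assertEqual(validate_label(1), "fake")

    def test_invalid_input(self):
        # invalid input must be rejected
        with self.assertRaises(ValueError):
            validate_label(2)

if __name__ == "__main__":
    unittest.main(exit=False)
```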
Integration testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components
is correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, and
special test cases. In addition, systematic coverage of business process flows, data fields,
predefined processes, and successive processes must be considered for testing. Before
functional testing is complete, additional tests are identified and the effective value of current
tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.
White Box Testing
White Box Testing is testing in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least its purpose. It is used
to test areas that cannot be reached from a black-box level.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner workings,
structure or language of the module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is a form of testing in which the software under test is treated as
a black box: you cannot "see" into it. The test provides inputs and responds to outputs
without considering how the software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of the software
lifecycle, although it is not uncommon for coding and unit testing to be conducted as two
distinct phases.
TESTING STRATEGIES
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level
– interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
8.SCREENSHOTS
To run the project, double-click on the 'run.bat' file to get the screen below.
In the above screen, click on the 'Upload Social Network Profiles Dataset' button and upload the dataset.
In the above screen, we select and upload the 'dataset.txt' file, then click on the 'Open' button to load the
dataset and get the screen below.
In the above screen, the dataset is loaded and a few of its records are displayed. Now click on the
'Preprocess Dataset' button to remove missing values and split the dataset into train and test parts.
In the above screen, we can see the dataset contains 600 records in total; the application uses 480 records
for training and 120 records to test the ANN. The dataset is now ready, so click on the 'Run ANN
Algorithm' button to run the ANN algorithm.
In the above screen, we can see the ANN iterating through model generation; with each additional epoch,
accuracy increases and loss decreases.
In the above screen, we can see that after 200 epochs the ANN reached 100% accuracy, and in the screen
below we can see the final ANN accuracy.
In the above screen, the ANN model has been generated. Now click on the 'ANN Accuracy & Loss Graph'
button to get the graph below.
In the above graph, the x-axis represents epochs and the y-axis represents the accuracy/loss value; the
green line represents accuracy and the blue line represents loss. We can see accuracy increased from
0.90 to 1 while the loss value decreased from 7 to 0.1. The model is now ready, so click on the
'Predict Fake/Genuine Profile using ANN' button to upload test data; the ANN will then predict the
result below.
In the above screen, we select and upload the 'test.txt' file, then click on the 'Open' button to load the
test data and get the prediction result below.
In the screen below, the uploaded test data is shown in square brackets, followed by the ANN prediction
result as genuine or fake.
9. CONCLUSIONS
• The Neural Network-Based Detection System holds significant potential to enhance the security of
social media platforms by effectively identifying and countering fraudulent profiles.
• The goal of building a Neural Network-Based Detection System addresses the urgent need to combat
online fraud in social media.
• Its emphasis on ethical compliance and user-friendly design underscores a commitment to security
and user satisfaction, marking it as a valuable asset in the ongoing battle against fraudulent
activities online.
• The proposed hybrid technique uses the most successful classifier, a neural network, and is also
used to improve the accuracy and reduce the time complexity of the algorithm. The proposed work
collected a real-time data set of Facebook and Twitter users.
10. REFERENCES
[1] Awasthi, S., Shanmugam, R., Jena, S.R. and Srivastava, A., 2020. Review of Techniques to
Prevent Fake Accounts on Social Media.
[2] Hajdu, G., Minoso, Y., Lopez, R., Acosta, M. and Elleithy, A., 2019, May. Use of Artificial
Neural Networks to Identify Fake Profiles. In 2019 IEEE Long Island Systems, Applications
and Technology Conference (LISAT) (pp. 1-4). IEEE.
[3] Kaur, J. and Sabharwal, M., 2018. Spam detection in online social networks using feed forward
neural network. In RSRI conference on recent trends in science and engineering.
[4] Khaled, S., El-Tazi, N. and Mokhtar, H.M., 2018, December. Detecting fake accounts on social
media. In 2018 IEEE International Conference on Big Data (Big Data) (pp. 3672-3681). IEEE.
[5] Meligy, A.M., Ibrahim, H.M. and Torky, M.F., 2017. Identity verification mechanism for
detecting fake profiles in online social networks. Int. J. Comput. Netw. Inf. Secur.(IJCNIS),
9(1), pp.31-39.
[6] Ramalingam, D. and Chinnaiah, V., 2018. Fake profile detection techniques in large-scale
online social networks: A comprehensive review. Computers & Electrical Engineering, 65,
pp.165-177.
Wanda, P. and Jie, H.J., 2020. DeepProfile: Finding fake profile in online social
network using dynamic CNN. Journal of Information Security and Applications, 52, p.102465.
[7] Zhang, J., Dong, B. and Philip, S.Y., 2020, April. Fakedetector: Effective fake news detection
with deep diffusive neural network. In 2020 IEEE 36th International Conference on Data
Engineering (ICDE) (pp. 1826-1829). IEEE.
[8] M. Egele, G. Stringhini, and G. Vigna “Towards Detecting Compromised Accounts on
social Networks,” IEEE, vol. 5971, no. c, 2015.
[9] D. M. Freeman and T. Hwa, "Detecting Clusters of Fake Accounts in Online Social Networks:
Categories and Subject Descriptors," AISec, 2015.
[10] M. Meligy, “Identity Verification Mechanism for Detecting Fake Profiles in Online Social
Networks,” IJCNIS, no. January, pp. 31–39, 2017.
[11] Ashraf Khalil, Hassan Hajjdiab, and Nabeel Al-Qirim , Detecting Fake Followers in Twitter:
A Machine Learning Approach, International Journal of Machine Learning and Computing,
No. 6, December 2017
[12] S. Khaled, N. El-Tazi and H. M. O. Mokhtar, "Detecting Fake Accounts on Social Media,"
2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 2018, pp.