Tech News

3711 Articles

Game Engine Wars: Unity vs Unreal Engine

Sugandha Lahoti
11 Apr 2018
6 min read
Ready Players. One Two Three! We begin with the epic battle between the two most prominent game engines out there: Unity vs Unreal Engine. Unreal Engine, the legacy engine, has been around for the past 20 years, while Unity, though relatively new (it's almost 12 years old), is nevertheless an equal champion. We will be evaluating these engines across six major factors. Without further ado, let the games begin.

Unity vs Unreal Engine Performance

Performance is a salient factor when evaluating a game engine. Unreal Engine uses C++, a lower-level programming language that gives developers more control over memory management. On top of this, Unreal Engine gives developers full access to the C++ source code, allowing them to edit and upgrade anything in the system. Unity, on the other hand, uses C#, where memory management is out of a developer's control. No control over memory means the garbage collector can trigger at random times and ruin performance.

Unreal offers an impressive range of visual effects and graphical features. More importantly, they require no external plugins (unlike Unity) to create powerful FX, terrain, cinematics, gameplay logic, animation graphs, etc. However, UE4 seems to perform various basic actions considerably slower: starting the engine, opening the editor, opening a project, saving projects, and so on take a lot of time, hampering the development process. Here's where Unity takes the edge. It is also the go-to game engine for creating mobile games.

Considering the above factors, in terms of sheer performance Unreal 4 takes the lead over Unity. But Unity may be making up for this shortfall by being more in sync with the times: great for creating mobile games, impressive plugins for AR, etc. Also read about Unity 2D and 3D game kits to simplify game development for beginners.

Learning curve and Ease of development

Unity provides an exhaustive list of resources to learn from. Its documentation is packed with complete descriptions complemented by a number of examples, as well as video and text tutorials and live training sessions. Along with the official Unity resources, there are also high-quality third-party tutorials available.

Unreal Engine offers developers a free development license and source code in exchange for a 5% royalty. Unreal Engine 4 has Blueprint visual scripting, a tool designed for non-programmers and designers to create games without writing a single line of code. It features a better at-a-glance game logic creation process, where flowcharts with connections between them represent the program flow. These flowcharts make games a lot faster to prototype and execute.

Unity offers an Asset Store for developers to help them with all aspects of design. It features a mix of animation and rigging tools, GUI generators, and motion capture software. It also has powerful asset management and attribute inspection. Unity is generally seen as the more intuitive and easier-to-grasp game engine, while Unreal Engine features a simple UI that doesn't take long to get up and running. With this, we can say that both Unity and Unreal are on par in terms of ease of use.

Unity vs Unreal Engine Graphics

When it comes to graphics, Unreal Engine 4 is a giant. It includes capabilities to create high-quality 2D and 3D games with state-of-the-art techniques such as particle simulation systems, deferred shading, lit translucency, post-processing features, and advanced dynamic lighting. Unity is also not far behind, with features such as static batching, physically-based shading, the Shuriken particle system, low-level rendering access, etc. Unreal Engine comes out the clear winner on raw graphical power, but if you don't need to create next-gen-level graphics, then something like Unreal Engine 4 may not be required, and hence Unity wins.

Platform Support/compatibility

Unity is a clear winner when it comes to the number of platforms supported. Here's a list of platforms offered by both Unity and Unreal:

Platform                        | Unreal        | Unity
iOS                             | Available     | Available
Android                         | Available     | Available
VR                              | Available     | Available (also HoloLens)
Linux                           | Available     | Available
Windows PC                      | Available     | Available
Mac OS X                        | Available     | Available
SteamOS                         | Available     | Available
HTML5                           | Available     | Not Available
Xbox One                        | Available     | Available (also Xbox 360)
PS4                             | Available     | Available
Windows Phone 8                 | Not Available | Available
Tizen                           | Not Available | Available
Android TV and Samsung Smart TV | Not Available | Available
Web Player                      | Not Available | Available
WebGL                           | Not Available | Available
PlayStation Vita                | Not Available | Available

Community Support

Community support is an essential criterion for evaluating a tool, especially a free one. Both Unity and Unreal have large and active communities. Forums and other community sources have friendly members who are quick to respond and help out. Having said that, a larger community of game developers contributes to Unity's Asset Store. This saves significant time and effort, as developers can pick up special effects, sprites, animations, etc. directly from the store rather than developing them from scratch. Correspondingly, more developers share tutorials and offer tech support for Unity.

Unity vs Unreal Engine Pricing

Unity offers a completely free version ready for download, a great option if you are new to game development. The Unity Pro version, which offers additional tools and capabilities (such as the Unity profiler), comes at $1,500 as a one-time charge, or $75/month. Unreal Engine 4, on the other hand, is completely free; there are no Pro or Free versions. However, Unreal Engine 4 has a royalty fee of 5% on resulting revenue if it exceeds $3,000 per quarter. Unreal Engine 4 is also completely free for colleges and universities, although the 5% royalty still applies. Both game engines are extremely affordable: Unity's free version is still a powerful engine, and Unreal Engine 4 is of course completely free apart from royalties.

The verdict

The above analysis favors Unreal as the preferred gaming engine. In reality, though, it all boils down to the game developer. Choosing the right engine really depends on the type of game you want to create, your audience, and your expertise level (such as your choice of programming language). Both these engines are evolving and changing at a rapid pace, and it is for the developer to decide where they want to head.

Also check out:
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence
Unity plugins for augmented reality application development
Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more

Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library

Sugandha Lahoti
17 Aug 2018
2 min read
Salesforce has open sourced TransmogrifAI, their end-to-end automated machine learning library for structured data. The library is currently used in production to help power the Salesforce Einstein AI platform. TransmogrifAI enables data scientists at Salesforce to transform customer data into meaningful, actionable predictions. Now, they have open-sourced the project to enable other developers and data scientists to build machine learning solutions at scale, fast.

TransmogrifAI is built on Scala and SparkML; it automates data cleansing, feature engineering, and model selection to arrive at a performant model. It encapsulates five main components of the machine learning process (image source: Salesforce Engineering):

- Feature Inference: TransmogrifAI allows users to specify a schema for their data to automatically extract the raw predictor and response signals as "Features". In addition to allowing for user-specified types, TransmogrifAI also does inference of its own. The strongly-typed features allow developers to catch a majority of errors at compile time rather than run time.
- Transmogrification, or automated feature engineering: TransmogrifAI comes with a myriad of techniques for all the supported feature types, ranging from phone numbers, email addresses, and geo-location to text data. It also optimizes the transformations to make it easier for machine learning algorithms to learn from the data.
- Automated Feature Validation: TransmogrifAI has algorithms that perform automatic feature validation to remove features with little to no predictive power. These algorithms are useful when working with high-dimensional and unknown data. They apply statistical tests based on feature types and, additionally, make use of feature lineage to detect and discard bias.
- Automated Model Selection: The TransmogrifAI Model Selector runs several different machine learning algorithms on the data and uses the average validation error to automatically choose the best one (see the sketch at the end of this article). It also automatically deals with the problem of imbalanced data by appropriately sampling the data and recalibrating predictions to match true priors.
- Hyperparameter Optimization: It automatically tunes hyperparameters and offers advanced tuning techniques.

This large-scale automation has brought down the total time taken to train models from weeks and months to a few hours, with just a few lines of code. You can check out the project to get started with TransmogrifAI. For detailed information, read the Salesforce Engineering blog.

Salesforce Spring 18 – New features to be excited about in this release!
How to secure data in Salesforce Einstein Analytics
How to create and prepare your first dataset in Salesforce Einstein
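TransmogrifAI itself is written in Scala on top of Spark, so the following is a language-neutral illustration only, not code from the library: a minimal Python sketch of the Model Selector idea described above, i.e. try several algorithms and keep the one with the best average validation score. The dataset and candidate models are arbitrary stand-ins:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)

    # Candidate algorithms; a real selector would search a much larger space.
    candidates = [LogisticRegression(max_iter=1000), RandomForestClassifier()]

    # Keep the model with the best mean cross-validated accuracy.
    best = max(candidates, key=lambda m: cross_val_score(m, X, y, cv=5).mean())
    print(type(best).__name__)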

YouTube bans dangerous pranks and challenges

Prasad Ramesh
17 Jan 2019
2 min read
YouTube has updated its policies to ban dangerous pranks and challenges that can be harmful to the victim of a prank or that encourage people to partake in dangerous behavior. Pranks and challenges have been around on YouTube for a long time. Many pranks are entertaining and harmless; some challenges are potentially unsafe, like extreme food-eating challenges. Recently, the "Bird Box Challenge", inspired by the Netflix movie Bird Box, has been popular. The challenge is to perform difficult tasks, like driving a car, blindfolded. It has received media coverage not for its entertainment value but for the dangers involved, and it has caused many accidents among people taking the challenge.

What is banned on YouTube?

In light of this challenge being harmful and dangerous to lives, YouTube has banned certain content by updating its policies page. Primarily, it has banned three kinds of pranks:

- Challenges that can cause serious danger to life or cause death
- Pranks that lead the victims to believe that they're in serious physical danger
- Any pranks that cause severe emotional distress in children

They state on their policies page: "YouTube is home to many beloved viral challenges and pranks, but we need to make sure what's funny doesn't cross the line into also being harmful or dangerous."

What are the terms?

Other than the points listed above, there is no clear or exhaustive list of the kinds of activities that are banned; YouTube moderators may take a call to remove a video. Over the next two months, YouTube will be removing any existing content that falls under this policy; however, content creators will not receive a strike. Going forward, any new content with objectionable material as per the policies will get the channel a "strike". Three strikes in the span of three months will lead to the channel's termination. Questionable content includes custom thumbnails or external links that display pornographic, graphically violent, malware, or spam content. So now you are less likely to see videos of driving blindfolded or eating Tide Pods.

Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations
Is the YouTube algorithm's promoting of #AlternativeFacts like Flat Earth having a real-world impact?
Worldwide Outage: YouTube, Facebook, and Google Cloud go down affecting thousands of users

.NET team announces ML.NET 0.6

Savia Lobo
10 Oct 2018
3 min read
On Monday, the .NET engineering team announced ML.NET 0.6, the latest monthly release of their cross-platform, open source machine learning framework for .NET developers. Some of the exciting features in this release include a new API for building and using machine learning models, performance improvements, and much more.

A new LearningPipeline API for building ML models

The new API is more flexible and enables new tasks and code workflows that weren't possible with the previous LearningPipeline API, which the team plans to deprecate. The new API is designed to support a wider set of scenarios, and it closely follows ML principles and naming from other popular ML frameworks like Apache Spark and Scikit-Learn. To know more about the new ML.NET API, visit the Microsoft blog.

Ability to get predictions from pre-trained ONNX models

ONNX, an open and interoperable model format, enables training a model in one framework (such as scikit-learn, TensorFlow, xgboost, and so on) and using it in another (ML.NET). ML.NET 0.6 includes support for getting predictions from ONNX models, via a new transformer and runtime for scoring ONNX models. There is a large variety of ONNX models created and trained in multiple frameworks that can export to the ONNX format; those models can be used for tasks like image classification, emotion recognition, and object detection. The ONNX transformer in ML.NET provides some data to an existing ONNX model and gets the score (prediction) from it (a language-neutral sketch of ONNX scoring appears at the end of this article).

Performance improvements

The ML.NET 0.6 release includes several performance improvements for making single predictions from a trained model. Two improvements stand out: moving from the legacy LearningPipeline API to the new Estimators API, and optimizing the performance of PredictionFunction in the new API. Here are some comparisons of the LearningPipeline with the improved PredictionFunction in the new Estimators API:

- Predictions on Iris data: 3,272x speedup (29x speedup with the Estimators API, with a further 112x speedup from improvements to PredictionFunction).
- Predictions on Sentiment data: 198x speedup (22.8x speedup with the Estimators API, with a further 8.68x speedup from improvements to PredictionFunction). This model contains a text featurizer, so it is not surprising to see a smaller gain.
- Predictions on Breast Cancer data: 6,541x speedup (59.7x speedup with the Estimators API, with a further 109x speedup from improvements to PredictionFunction).

Improvements in the type system

In this ML.NET version, the Dv type system has been replaced with .NET's standard type system, which makes ML.NET easier to use. ML.NET previously had its own type system to help it deal with missing values (a common case in ML); that type system required users to work with types like DvText, DvBool, DvInt4, etc. One effect of this change is that only floats and doubles have missing values, represented by NaN. Due to the improved approach to dependency injection, users can also deploy ML.NET in additional scenarios using .NET app models such as Azure Functions, without convoluted workarounds.

To know more about the other improvements in ML.NET 0.6, visit the Microsoft blog.

Microsoft open sources Infer.NET, its popular model-based machine learning framework
Neural Network Intelligence: Microsoft's open source automated machine learning toolkit
.NET Core 3.0 and .NET Framework 4.8 more details announced
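Returning to the ONNX support described above: ML.NET's ONNX transformer lives in C#, but the scoring idea is the same in any runtime. As a rough, language-neutral illustration only, here is a minimal Python sketch using onnxruntime; the model path and input shape are assumptions for the example, not anything from the ML.NET release:

    import numpy as np
    import onnxruntime as ort

    # Load a pre-trained ONNX model (hypothetical path) and feed it one input tensor.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    x = np.random.rand(1, 4).astype(np.float32)  # shape must match what the model expects
    outputs = session.run(None, {input_name: x})
    print(outputs[0])  # the model's score (prediction)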

Ex-Googler who quit Google on moral grounds writes to Senate about the company’s “unethical” China censorship plan

Melisha Dsouza
27 Sep 2018
4 min read
“I am part of a growing movement in the tech industry advocating for more transparency, oversight and accountability for the systems we build.”
- Jack Poulson, former Google scientist

Project Dragonfly is making its rounds on the internet yet again. Jack Poulson, a former Google scientist who quit Google in September 2018 over its plan to build a censored search engine in China, has written a letter to U.S. senators revealing new details of the project. The letter lists several details of Google's work on the Chinese search engine that had been reported but never officially confirmed by the company. He affirms that some company employees may have "actively subverted" an internal privacy review of the system.

Poulson was strictly opposed to the idea of Google supporting China's censorship by blacklisting keywords such as human rights, democracy, peaceful protest, and religion in its search engine. In protest at this project, more than 1,000 employees signed an open letter asking the company to be transparent. Many employees, including Poulson, took the drastic step of resigning from the company altogether.

Now, fearing Google's role in violating human rights in China, Poulson has sent a letter to members of the Senate Committee on Commerce, Science, and Transportation. The letter states that there has been "a pattern of unethical and unaccountable decision making from company leadership" at Google. He has requested Keith Enright, Google's chief privacy officer, to respond to concerns raised by 14 leading human rights groups, who said in late August that Dragonfly could result in Google "directly contributing to, or [becoming] complicit in, human rights violations."

The letter highlights a major flaw in the process of developing the Chinese search platform. Poulson says there was "a catastrophic failure of the internal privacy review process, which one of the reviewers characterized as [having been] actively subverted." Citing anonymous sources familiar with the project, The Intercept affirms that the "catastrophic failure" Poulson mentioned relates to an internal dispute between Google employees: those who work on privacy issues, and the engineers who developed the censored search system. The privacy reviewers were led to believe that the code used for developing the engine did not involve user data. After The Intercept exposed the project in early August, the privacy reviewers examined the code and felt that their colleagues working on Dragonfly had seriously and purposely misled them. The engine did involve user data and was designed to link users' search queries to their personal phone numbers, and to track their internet movements, IP addresses, and information about the devices they use and the links they clicked on.

Poulson told the senators that he could "directly verify" that a prototype of Dragonfly would allow a Chinese partner company to "search for a given user's search queries based on their phone number." The code incorporates an extensive censorship blacklist developed in accordance with the Chinese government. It censors words like the English term "human rights", the Mandarin terms for "student protest" and "Nobel prize", and very large numbers of phrases involving "Xi Jinping" and other members of the CCP. The engine is explicitly coded to ensure only Chinese government-approved air quality data would be returned in response to Chinese users' searches.

This incident takes us back to August 2018, when, in an open letter to Google CEO Sundar Pichai, US Senator for Florida Marco Rubio, leading a bipartisan group of senators, expressed concerns over the project being "deeply troubling" and risking making "Google complicit in human rights abuses related to China's rigorous censorship regime". If Google does go ahead with this project, other non-democratic nations could follow suit and demand customization of the search engine as per their rules, even if those rules violate human rights. Citizens will have to think twice before leaving any internet footprint that could be traced by the government. To gain deeper insights on this news, you can head over to The Intercept.

1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project for China
Skepticism welcomes Germany's DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology
Google's 'mistakenly deployed experiment' covertly activated battery saving mode on multiple phones today

Microsoft showcases its edgy AI toolkit at Connect(); 2017

Sugandha Lahoti
17 Nov 2017
3 min read
At the ongoing Microsoft Connect(); 2017, Microsoft has unveiled its latest innovations in AI development platforms. The Connect(); conference this year is all about new tools and cloud services that help developers seize the growing opportunity around artificial intelligence and machine learning. Microsoft has made two major announcements to capture the AI market.

Visual Studio Tools for AI

Microsoft has announced new tools for its Visual Studio IDE specifically for building AI applications. Visual Studio Tools for AI, currently in beta, is an extension to Visual Studio 2017. It allows developers, data scientists, and machine learning engineers to embed deep learning models into applications, with built-in support for popular machine learning frameworks such as Microsoft Cognitive Toolkit (CNTK), Google TensorFlow, Caffe2, and MXNet. It also comes packed with features such as custom metrics, history tracking, enterprise-ready collaboration, and data science reproducibility and auditing.

Visual Studio Tools for AI allows interactive debugging of deep learning applications with built-in features like syntax highlighting, IntelliSense, and text auto-formatting. Training AI models in the cloud is also possible through integration with Azure Machine Learning, which additionally allows deploying a model into production. Visualization and monitoring of AI models is available using TensorBoard, an integrated open tool that can run both locally and in remote VMs.

Azure IoT Edge

Microsoft sees IoT as a mission-critical business asset. With this in mind, it has developed a product for IoT solutions. Termed Azure IoT Edge, it enables developers to run cloud intelligence on the edge of IoT devices. Azure IoT Edge can operate on Windows and Linux as well as on multiple hardware architectures (x64 and ARM). Developers can work in languages such as C#, C, and Python to deploy models on Azure IoT Edge.

Azure IoT Edge is a bundle of multiple components. With the AI Toolkit, developers can start building AI applications. With Azure Machine Learning, AI applications can be created, deployed, and managed on any framework; Azure Machine Learning also includes a set of pre-built AI models for common tasks. In addition, using the Azure IoT Hub, developers can deploy Edge modules on multiple IoT Edge devices. Using a combination of Azure Machine Learning, Azure Stream Analytics, Azure Functions, and any third-party code, a complex data pipeline can be created to build and test container-based workloads, and this pipeline can be managed using the Azure IoT Hub.

Customer reviews of Azure IoT Edge have been positive so far. Here's what Matt Boujonnier, Analytics Application Architect at Schneider Electric, says:

"Azure IoT Edge provided an easy way to package and deploy our Machine Learning applications. Traditionally, machine learning is something that has only run in the cloud, but for many IoT scenarios that isn't good enough, because you want to run your application as close as possible to any events. Now we have the flexibility to run it in the cloud or at the edge—wherever we need it to be."

With the launch of these two new tools, Microsoft is catching up quickly with the likes of Google and IBM to capture the AI market and provide developers with an intelligent edge.

SteamVR introduces new controllers for game developers, the SteamVR Input system

Sugandha Lahoti
16 May 2018
2 min read
SteamVR has announced new controller support that adds accessibility features to the virtual reality ecosystem. The SteamVR Input system lets you build controller bindings for any game, "even for controllers that didn't exist when the game was written", says Valve's Joe Ludwig in a Steam forum post. What this essentially means is that any past, present, or future game can hypothetically add support for any SteamVR-compatible controller.

(Image source: Steam community)

Supported controllers include the Xbox One gamepad, Vive Tracker, Oculus Touch, and the motion controllers for HTC Vive and Windows Mixed Reality VR headsets.

The key-binding system of SteamVR Input allows users to build binding configurations. Users can adapt the controls of games to take into account user behavior such as left-handedness, a disability, or personal preference. These configurations can also be shared easily with other users of the same game via the Steam Workshop.

For developers, the new SteamVR Input system means easier adaptation of games to diverse controllers. Developers entirely control the default bindings for each controller type. They can also offer alternate control schemes directly without the need to change the games themselves. SteamVR Input works with every SteamVR application; it doesn't require developers to update their app to support it.

Hardware designers are also free to try more types of input beyond the Vive Tracker, Oculus Touch, etc. They can expose whatever input controls exist on their device and then describe that device to the system. Most importantly, the entire mechanism is captured in an easy-to-use UI that is available in-headset under the Settings menu.

(Image source: Steam community)

For now, SteamVR Input is in beta. Details for developers are available on the OpenVR SDK 1.0.15 page. You can also see the documentation to enable native support in your applications. Hardware developers can read the driver API documentation to see how they can enable this new system for their devices.

Google open sources Seurat to bring high precision graphics to Mobile VR
Oculus Go, the first stand-alone VR headset arrives!
Google Daydream powered Lenovo Mirage solo hits the market

Logging the history of my past SQL Saturday presentations from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
3 min read
(2020-Dec-31) PASS (formerly known as the Professional Association for SQL Server) is the global community for data professionals who use the Microsoft data platform. On December 17, 2020, PASS announced that because of COVID-19, they were ceasing all operations effective January 15, 2021. PASS offered many training and networking opportunities; one such training stream was SQL Saturday. PASS SQL Saturday was a series of free training events designed to expand knowledge sharing and the learning experience for data professionals.

Photo by Daniil Kuželev on Unsplash

Since the content and historical records of SQL Saturday will soon become unavailable, I decided to log the history of all my past SQL Saturday presentations. For this table I give full credit to André Kamman and Rob Sewell, who extracted and saved this information here: https://sqlsathistory.com/.

My SQL Saturday history

Date       | Name                             | Location       | Track                                    | Title
2016/04/16 | SQLSaturday #487 Ottawa 2016     | Ottawa         | Analytics and Visualization              | Excel Power Map vs. Power BI Globe Map visualization
2017/01/03 | SQLSaturday #600 Chicago 2017    | Addison        | BI Information Delivery                  | Power BI with Narrative Science: Look Who's Talking!
2017/09/30 | SQLSaturday #636 Pittsburgh 2017 | Oakdale        | BI Information Delivery                  | Geo Location of Twitter messages in Power BI
2018/09/29 | SQLSaturday #770 Pittsburgh 2018 | Oakdale        | BI Information Delivery                  | Power BI with Maps: Choose Your Destination
2019/02/02 | SQLSaturday #821 Cleveland 2019  | Cleveland      | Analytics Visualization                  | Power BI with Maps: Choose Your Destination
2019/05/10 | SQLSaturday #907 Pittsburgh 2019 | Oakdale        | Cloud Application Development Deployment | Using Azure Data Factory Mapping Data Flows to load Data Vault
2019/07/20 | SQLSaturday #855 Albany 2019     | Albany         | Business Intelligence                    | Power BI with Maps: Choose Your Destination
2019/08/24 | SQLSaturday #892 Providence 2019 | East Greenwich | Cloud Application Development Deployment | Continuous integration and delivery (CI/CD) in Azure Data Factory
2020/01/02 | SQLSaturday #930 Cleveland 2020  | Cleveland      | Database Architecture and Design         | Loading your Data Vault with Azure Data Factory Mapping Data Flows
2020/02/29 | SQLSaturday #953 Rochester 2020  | Rochester      | Application Database Development         | Loading your Data Vault with Azure Data Factory Mapping Data Flows

Closing notes

I think I have already told this story a couple of times. Back in 2014-2015, I started to attend SQL Saturday training events in the US by driving from Toronto. At that time I had only spoken a few times at our local user group and had never presented at SQL Saturdays. While driving, I needed to pass customs control at the US border, and a customs officer would usually ask me a set of questions about my place of work, my citizenship, and the destination of my trip. I answered that I was going to attend an IT conference called SQL Saturday, a free event for data professionals. At that point, the customs officer positively challenged me and told me that I needed to start teaching others based on my long experience in IT; we laughed, and then he let me pass the border. I'm still very thankful to that US customs officer for this positive affirmation. SQL Saturdays have been a great journey for me!

The post Logging the history of my past SQL Saturday presentations appeared first on SQLServerCentral.

Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows

Packt Editorial Staff
09 May 2019
8 min read
A typical deep learning workflow starts with ideation and research around a problem statement, where the architectural design and model decisions come into play. Following this, the theoretical model is tested using prototypes. This includes trying out different models or techniques, such as skip connections, or making decisions on what not to try out. PyTorch was started as a research framework by a Facebook intern, and it has since grown into a framework used for research and prototyping as well as for writing efficient models with serving modules. The PyTorch deep learning workflow is fairly equivalent to the workflow implemented by almost everyone in the industry, even for highly sophisticated implementations, with slight variations.

In this article, we explain the core of ideation and planning, and the design and experimentation, of the PyTorch deep learning workflow. This article is an excerpt from the book PyTorch Deep Learning Hands-On by Sherin Thomas and Sudhanshu Passi. The book attempts to provide an entirely practical introduction to PyTorch, with numerous examples and dynamic AI applications, and demonstrates the simplicity and efficiency of the PyTorch approach to machine intelligence and deep learning.

Ideation and planning

Usually, in an organization, the product team comes up with a problem statement for the engineering team, to know whether they can solve it or not. This is the start of the ideation phase. However, in academia, this could be the decision phase where candidates have to find a problem for their thesis. In the ideation phase, engineers brainstorm and find the theoretical implementations that could potentially solve the problem. In addition to converting the problem statement to a theoretical solution, the ideation phase is where we decide what the data types are and what dataset we should use to build the proof of concept (POC) of the minimum viable product (MVP). Also, this is the stage where the team decides which framework to go with by analyzing the behavior of the problem statement, available implementations, available pretrained models, and so on. This stage is very common in the industry, and I have come across numerous examples where a well-planned ideation phase helped the team to roll out a reliable product on time, while a non-planned ideation phase destroyed the whole product creation.

Design and experimentation

The crucial part of design and experimentation lies in the dataset and the preprocessing of the dataset. For any data science project, the major timeshare is spent on data cleaning and preprocessing, and deep learning is no exception. Data preprocessing is one of the vital parts of building a deep learning pipeline. Usually, real-world datasets are not cleaned or formatted for a neural network to process; conversion to floats or integers, normalization, and so on are required before further processing. Building a data processing pipeline is also a non-trivial task, which consists of writing a lot of boilerplate code. To make it much easier, dataset builders and DataLoader pipeline packages are built into the core of PyTorch.

The dataset and DataLoader classes

Different types of deep learning problems require different types of datasets, and each of them might require different types of preprocessing depending on the neural network architecture we use. This is one of the core problems in deep learning pipeline building.
Although the community has made datasets for different tasks available for free, writing a preprocessing script is almost always painful. PyTorch solves this problem by giving abstract classes to write custom datasets and data loaders. The example given here is a simple dataset class to load the fizzbuzz dataset, but extending this to handle any type of dataset is fairly straightforward. PyTorch's official documentation uses a similar approach to preprocess an image dataset before passing it to a complex convolutional neural network (CNN) architecture.

A dataset class in PyTorch is a high-level abstraction that handles almost everything required by the data loaders. The custom dataset class defined by the user needs to override the __len__ and __getitem__ functions of the parent class, where __len__ is used by the data loaders to determine the length of the dataset and __getitem__ is used by the data loaders to get the item. The __getitem__ function expects the user to pass the index as an argument and returns the item that resides at that index:

    from dataclasses import dataclass
    from torch.utils.data import Dataset, DataLoader

    @dataclass(eq=False)
    class FizBuzDataset(Dataset):
        input_size: int
        start: int = 0
        end: int = 1000

        def encoder(self, num):
            # Binary-encode the number, left-padded with zeros to input_size.
            ret = [int(i) for i in '{0:b}'.format(num)]
            return [0] * (self.input_size - len(ret)) + ret

        def __getitem__(self, idx):
            x = self.encoder(idx)
            if idx % 15 == 0:
                y = [1, 0, 0, 0]   # fizzbuzz
            elif idx % 5 == 0:
                y = [0, 1, 0, 0]   # buzz
            elif idx % 3 == 0:
                y = [0, 0, 1, 0]   # fizz
            else:
                y = [0, 0, 0, 1]   # neither
            return x, y

        def __len__(self):
            return self.end - self.start

The implementation of this custom dataset uses brand-new dataclasses from Python 3.7. dataclasses help eliminate boilerplate code for Python magic functions, such as __init__, using dynamic code generation. This requires the code to be type-hinted, and that's what the first three lines inside the class are for. You can read more about dataclasses in the official documentation of Python (https://docs.python.org/3/library/dataclasses.html).

The __len__ function returns the difference between the end and start values passed to the class. In the fizzbuzz dataset, the data is generated by the program. The implementation of data generation is inside the __getitem__ function, where the class instance generates the data based on the index passed by DataLoader. PyTorch made the class abstraction as generic as possible, such that the user can define what the data loader should return for each id. In this particular case, the class instance returns the input and output for each index, where the input, x, is the binary-encoded version of the index itself and the output is the one-hot encoded output with four states. The four states represent whether the number is a multiple of three (fizz), a multiple of five (buzz), a multiple of both three and five (fizzbuzz), or not a multiple of either three or five.

Note: For Python newbies, the way the dataset works can be understood by looking first at the loop that iterates over the integers from zero to the length of the dataset (the length is returned by the __len__ function when len(object) is called).
The following snippet shows the simple loop (note that input_size has no default value, so it must be supplied when constructing the dataset):

    # input_size is required: it has no default in the dataclass definition.
    dataset = FizBuzDataset(input_size=10)
    for i in range(len(dataset)):
        x, y = dataset[i]

    dataloader = DataLoader(dataset, batch_size=10, shuffle=True,
                            num_workers=4)
    for batch in dataloader:
        print(batch)

The DataLoader class accepts a dataset class that inherits from torch.utils.data.Dataset. DataLoader accepts the dataset and does non-trivial operations such as mini-batching, multithreading, shuffling, and so on, to fetch the data from it. It accepts a dataset instance from the user and uses the sampler strategy to sample data as mini-batches. The num_workers argument decides how many parallel workers should be operating to fetch the data. This helps to avoid a CPU bottleneck so that the CPU can keep up with the GPU's parallel operations. Data loaders allow users to specify whether to use pinned CUDA memory, which copies the data tensors to CUDA's pinned memory before returning them to the user (a minimal configuration sketch appears at the end of this excerpt). Using pinned memory is the key to fast data transfers between devices, since the data is loaded into the pinned memory by the data loader itself, which is done by multiple cores of the CPU anyway.

Most often, especially while prototyping, custom datasets might not be available for developers, and in such cases they have to rely on existing open datasets. The good thing about working with open datasets is that most of them are free from licensing burdens, and thousands of people have already tried preprocessing them, so the community will help out. PyTorch came up with utility packages for all three types of datasets, with pretrained models, preprocessed datasets, and utility functions to work with these datasets.

This article is about how to build a basic pipeline for deep learning development. The system we defined here is a very common/general approach followed by different sorts of companies, with slight changes. The benefit of starting with a generic workflow like this is that you can build a really complex workflow on top of it as your team/project grows.

Build deep learning workflows and take deep learning models from prototyping to production with PyTorch Deep Learning Hands-On written by Sherin Thomas and Sudhanshu Passi.
F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Top 10 deep learning frameworks
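As promised above, here is a minimal pinned-memory sketch. pin_memory is a standard DataLoader argument, but this exact configuration is illustrative and assumes a CUDA-capable machine; it is not taken from the book:

    # Batches are collated into page-locked (pinned) host memory, so subsequent
    # .to('cuda', non_blocking=True) copies can overlap with computation.
    pinned_loader = DataLoader(dataset, batch_size=10, shuffle=True,
                               num_workers=4, pin_memory=True)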

Google Bristlecone: A new quantum processor by Google’s Quantum AI Lab

Sugandha Lahoti
06 Mar 2018
2 min read
The quest to conquer the quantum world is rapidly advancing, and another contender in this conquest is Google, which has launched a preview of Bristlecone, a new quantum processor. Google's Bristlecone was unveiled at the annual American Physical Society meeting in Los Angeles on March 5, 2018. According to Google, "Bristlecone would be a compelling proof-of-principle for building larger scale quantum computers." The purpose of this quantum processor is to provide a testbed for research into system error rates and the scalability of Google's qubit technology, along with applications in quantum simulation, optimization, and machine learning.

(Image: A preview of Bristlecone, Google's new quantum processor. On the right is a cartoon of the device: each "X" represents a qubit, with nearest-neighbor connectivity.)

Google Bristlecone uses a new architecture that allows 72 quantum bits on a single array, with an overlapping design that puts two different grids together. Google has optimized Bristlecone for the lowest possible error rate using a specialized process called quantum error correction. Google's previous 9-qubit linear quantum computers demonstrated error rates of 1% for readout, 0.1% for single-qubit gates, and 0.6% for two-qubit gates. Google Bristlecone uses the same scheme for coupling, control, and readout, but is scaled to a square array of 72 qubits. Google researchers chose a device of this size to demonstrate quantum supremacy in the future, to investigate first- and second-order error correction using the surface code, and to facilitate quantum algorithm development on actual hardware.

The intended research direction of the Quantum AI Lab is to access near-term applications on the road to building an error-corrected quantum computer. This, Google says, "would require harmony between a full stack of technology ranging from software and control electronics to the processor itself. Getting this right requires careful systems engineering over several iterations."

More information about Google Bristlecone is available on the Google Research blog.

NVTOP: An htop-like monitoring tool for NVIDIA GPUs on Linux

Prasad Ramesh
09 Oct 2018
2 min read
People started using htop when top just didn't provide enough information. Now there is NVTOP, a tool that looks similar to htop but displays the process information loaded on your NVIDIA GPU. It works on Linux systems and displays detailed information about processes, the memory they use, and which GPU they run on, along with total GPU and memory usage. The first version of this tool was released in July last year. The latest change made the process list and command options scrollable.

Some of the features of NVTOP are:

- Sorting by column
- Selecting/ignoring a specific GPU by ID
- Killing a selected process
- A monochrome option

Yes, it has multi-GPU support and can display the running processes from all of your GPUs. The information printed out looks similar to what htop would display.

(Image source: GitHub)

There is also a manual page to give some guidance in using NVTOP. It can be accessed with this command:

    man nvtop

There are OS-specific installation steps on GitHub for Ubuntu/Debian, Fedora/RedHat/CentOS, OpenSUSE, and Arch Linux.

Requirements

Two libraries are needed to build and run NVTOP:

- The NVIDIA Management Library (NVML), for querying GPU information
- The ncurses library, for the user interface and to make it colorful

Supported GPUs

The NVTOP tool works only for NVIDIA GPUs and runs on Linux systems. One of its dependencies is the NVML library, which does not support some queries from GPUs older than the Kepler microarchitecture. That means anything before the GeForce 600 series, GeForce 700 series, or GeForce 800M likely wouldn't work. For AMD users, there is a tool called radeontop. NVTOP is provided under the GPLv3 license. For more details, head over to the NVTOP GitHub repository.

NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499
NVIDIA open sources its material definition language, MDL SDK

Hope! from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
2 min read
2020 was a rough year. We've had friends and family leave us. Jobs lost. Health scares aplenty, and that's without counting a global pandemic. The end of PASS. US politics has been nail-biting, to say the very least. All around, it's just been a tough year.

On the other hand, I'm still alive, and if you are reading this, so are you. There are vaccines becoming available for Covid, and it looks like the US government may not try to kill us all off in 2021. Several people I know have had babies! I've lost over 50 lbs! (Although I absolutely do not recommend my methods.) Microsoft is showing its usual support for the SQL Server community, and the community itself is rallying together and doing everything it can to salvage resources from PASS. And we are still, and always, a community that thrives on supporting each other.

2020 was a difficult year. But there is always that most valuable thing: hope.

A singer/songwriter I follow on YouTube did a 2020 year-in-review song. It's worth watching just for her amazing talent and beautiful voice, but at about 4:30 she makes a statement that really resonated with me:

There's life in between the headlines and fear.
The little victories made this year.
No matter what happens we keep doing good.
- Is that all we have?
Yes and we always should!
There's nothing you can't overcome.

https://www.youtube.com/watch?v=z9xwXJvXBIw

So for this new year I wish all of you that most precious of gifts: hope.

The post Hope! appeared first on SQLServerCentral.

Google announces the largest overhaul of their Cloud Speech-to-Text

Vijin Boricha
20 Apr 2018
2 min read
Last month Google announced Cloud Text-to-Speech, their speech synthesis API that features DeepMind and WaveNet models. Now, they have announced their largest overhaul of Cloud Speech-to-Text (formerly known as the Cloud Speech API) since it was introduced in 2016. Google's Speech-to-Text API has been enhanced for business use cases, including phone-call and video transcription. With this new Cloud Speech-to-Text update, one can get access to the latest research from Google's machine learning experts, all via a simple REST API. It also comes with a standard service level agreement (SLA) with 99.9% availability.

Here's a sneak peek into the latest updates to Google's Cloud Speech-to-Text API:

- New video and phone call transcription models: Google has added models created for specific use cases, such as transcriptions of phone calls and of audio from video.
- Readable text with automatic punctuation: Google created a new LSTM neural network to automate punctuation in long-form speech transcription. This Cloud Speech-to-Text model, currently in beta, can automatically suggest commas, question marks, and periods for your text.
- Use case description with recognition metadata: Information from transcribed audio or video can be tagged with descriptions such as "voice commands to a Google Home assistant" or "soccer sport TV shows". This metadata is aggregated across Cloud Speech-to-Text users to prioritize upcoming work on the API.

A client-library sketch of the first two options follows at the end of this article. To know more about this update in detail, visit Google's blog post.
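These options surface as simple configuration flags in Google's client libraries. As an illustrative sketch only (the announcement predates this exact Python client version, and the file name, encoding, and sample rate here are assumptions), a transcription request using the video model with automatic punctuation might look like this:

    from google.cloud import speech

    client = speech.SpeechClient()

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # assumed input format
        sample_rate_hertz=16000,
        language_code="en-US",
        model="video",                      # the video-optimized transcription model
        enable_automatic_punctuation=True,  # beta: commas, periods, question marks
    )

    with open("meeting.wav", "rb") as f:    # hypothetical audio file
        audio = speech.RecognitionAudio(content=f.read())

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)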

2020 was certainly a year on the calendar from Blog Posts - SQLServerCentral

Anonymous
30 Dec 2020
1 min read
According to my blog post schedule, this is the final post of the year. It's nothing more than a coincidence, but making it through the worst year in living memory could also be considered a sign. While it's true that calendars are arbitrary, Western tradition says this is the end of one more cycle, so let's...

Continue reading 2020 was certainly a year on the calendar

The post 2020 was certainly a year on the calendar appeared first on Born SQL.

The post 2020 was certainly a year on the calendar appeared first on SQLServerCentral.

Top life hacks for prepping for your IT certification exam

Ronnie Wong
14 Oct 2021
5 min read
I remember deciding to pursue my first IT certification, the CompTIA A+. I had signed up for a class that lasted one week per exam, meaning two weeks. We reviewed so much material during that time that the task of preparing for the certification seemed overwhelming. Even with an instructor, the scope of the material was a challenge.

Mixed messages

Some days I would hear from others how difficult the exam was; on other days, I would hear how easy it was. I would also hear advice about topics I should study more, and even some topics I hadn't thought about studying. These conflicting comments only increased my anxiety as my exam date drew closer. No matter what I read, studied, or heard from people about the exam, I felt like I was not prepared to pass it. Overwhelmed by the sheer volume of material, anxious from the comments of others, and feeling like I hadn't done enough preparation, when I finally passed the exam it didn't bring me joy so much as relief that I had survived it.

Then it was time to prepare for the second exam, and those same feelings came back, but this time with a little more confidence that I could pass. Since that first A+ exam, I have not only passed more exams, I have also helped others prepare successfully for many certification exams.

Exam hacks

Below is a list of tips that have helped not only me but also others to successfully prepare for exams.

1. Start with the exam objectives and keep a copy of them close by for reference during your whole preparation time. If you haven't downloaded them (many are on the exam vendor's site), do it now. This is your verified guide to the topics that will appear on the exam, and it will help you feel confident enough to ignore others when they tell you what to study. If it's not in the exam objectives, then it is more than likely not on the exam. There is never a 100% guarantee, but whatever they ask will at least be related to the topics found in the objectives. They will not be in addition to the objectives.

2. To sharpen the focus of your preparation, refer to your exam objectives again. You may see this as just a list, but it is so much more. Put differently, the exam objectives set the scope of what to study. How? Pay attention to the verbs used in the exam objectives. The objectives never give you a topic without using a verb to help you recognize the depth you should go into when you study, e.g., "configure and verify HSRP." You are not only learning what HSRP is; you should know where and how to configure it and verify it working successfully. If it reads "describe the hacking process", you will know the topic is more conceptual. A conceptual topic with that verb would require you to define it and put it in context.

3. The exam objectives also show the weighting of the topics on the exam. Vendors break down the objective domain into percentages. For example, you may find one topic accounts for 40% of the exam. This helps you predict which topics you will see more questions about, meaning you know which topics you're more likely to see than others. You may also see that you already know a good percentage of the exam. It's a confidence booster, and that mindset is key in your preparation.

4. A good study session begins and ends with a win. You can easily sabotage your study by picking a topic that is too difficult to get through in a single session. In the same manner, ending a study session where you feel like you didn't learn anything is disheartening and demotivating at best. How do we ensure that we can begin and end a study session with a win? Create a study session with three topics: begin with an easier topic to review or learn, then choose a topic that is more challenging, and end your study session with another easier topic. Following this model, do a minimum of one session or a maximum of two sessions a day.

5. Put your phone away. Set your emails and notifications, instant messaging, and social media to do-not-disturb during your study session time. Good study time is uninterrupted, except on your very specific and short breaks. It's amazing how much more you can accomplish when you have dedicated study time away from beeps, rings, and notifications.

Prep is king

Preparing for a certification exam is hard enough due to the quantity of material and the added stress of sitting for an exam and passing it. You can make your preparation more effective by using the objectives to guide you, putting a motivating session plan in place, and reducing distractions during your dedicated study times. These are commonly overlooked preparation hacks that will benefit you in your next certification exam.

These are just some handy hints for passing IT certification exams. What tips would you give? Have you recently completed a certification, or are you planning on taking one soon? Packt would love to hear your thoughts, so why not take the following survey? The first 200 respondents will get a free ebook of choice from the Packt catalogue.*

*To receive the ebook, you must supply an email. The free ebook requires a no-charge account creation with Packt.