Tech Guides - Artificial Intelligence

170 Articles

What is PyTorch and how does it work?

Sunith Shetty
18 Sep 2018
7 min read
PyTorch is a Python-based scientific computing package that uses the power of graphics processing units. It is also one of the preferred deep learning research platforms, built to provide maximum flexibility and speed. It is known for two of its most important high-level features: tensor computation with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. There are many existing Python libraries with the potential to change how deep learning and artificial intelligence are performed, and this is one such library. One of the key reasons behind PyTorch's success is that it is completely Pythonic, and you can build neural network models effortlessly. It is still a young player compared to its competitors; however, it is gaining momentum fast.

A brief history of PyTorch
Since its release in January 2016, researchers have continued to adopt PyTorch in growing numbers. It has quickly become a go-to library because of the ease with which it lets you build extremely complex neural networks. It is giving tough competition to TensorFlow, especially when used for research work. However, there is still some time before it is adopted by the masses, due to its "new" and "under construction" tags. PyTorch's creators envisioned the library as highly imperative, so that numerical computations can run quickly. This methodology fits perfectly with the Python programming style. It allows deep learning scientists, machine learning developers, and neural network debuggers to run and test parts of their code in real time, so they don't have to wait for the entire program to execute to check whether it works or not. You can always use your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch's functionality when required.

Now you might ask, why PyTorch? What's so special about using it to build deep learning models? The answer is quite simple: PyTorch is a dynamic library (very flexible, and usable as your requirements change) that has been adopted by many researchers, students, and artificial intelligence developers. In a recent Kaggle competition, PyTorch was used by nearly all of the top 10 finishers.

Some of the key highlights of PyTorch include:
Simple interface: It offers an easy-to-use API, so it is simple to operate and runs like ordinary Python.
Pythonic in nature: Being Pythonic, the library integrates smoothly with the Python data science stack, so it can leverage all the services and functionality offered by the Python environment.
Computational graphs: PyTorch provides dynamic computational graphs, which you can change during runtime. This is highly useful when you have no idea how much memory will be required for creating a neural network model.

PyTorch Community
The PyTorch community is growing on a daily basis. In just a year and a half, it has shown a great amount of development that has led to its citation in many research papers and groups. More and more people are bringing PyTorch into their artificial intelligence research labs to build quality-driven deep learning models. The interesting fact is that PyTorch is still in early-release beta, but the pace at which everyone is adopting this deep learning framework shows its real potential and power in the community.
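To make the "tensor computation with GPU acceleration" and "dynamic graph" points above concrete, here is a minimal sketch (not from the original article) of the workflow, assuming a standard PyTorch installation:

```python
import torch

# Tensor computation with GPU acceleration: move to CUDA if a GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 4, device=device)

# Tape-based autograd: operations are recorded as they execute, so the graph is
# rebuilt dynamically on every forward pass and can change from one run to the next.
w = torch.randn(4, 1, device=device, requires_grad=True)
loss = (x @ w).sum()
loss.backward()                 # gradients flow back through this particular graph
print(w.grad.shape)             # torch.Size([4, 1])
```

Because the graph is recorded as the code runs, changing the Python control flow between iterations changes the graph itself, which is what the "define-by-run" flexibility described above refers to.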
Even though it is in beta release, there are 741 contributors on the official GitHub repository working on enhancing and improving the existing PyTorch functionality. PyTorch isn't limited to specific applications, thanks to its flexibility and modular design. It has seen heavy use by leading tech giants such as Facebook, Twitter, NVIDIA, and Uber in multiple research domains such as NLP, machine translation, image recognition, neural networks, and other key areas.

Why use PyTorch in research?
Anyone working in the field of deep learning and artificial intelligence has likely worked with TensorFlow, Google's most popular open source library, before. However, the latest deep learning framework, PyTorch, solves major problems in terms of research work. Arguably, PyTorch is TensorFlow's biggest competitor to date, and it is currently a much-favored deep learning and artificial intelligence library in the research community.

Dynamic computational graphs
PyTorch avoids the static graphs used in frameworks such as TensorFlow, allowing developers and researchers to change how the network behaves on the fly. Early adopters prefer PyTorch because it is more intuitive to learn than TensorFlow.

Different back-end support
PyTorch uses different backends for CPU, GPU and various functional features rather than a single back-end. It uses the tensor backend TH for CPU and THC for GPU, while neural network backends such as THNN and THCUNN serve CPU and GPU respectively. Using separate backends makes it very easy to deploy PyTorch on constrained systems.

Imperative style
The PyTorch library is designed to be intuitive and easy to use. When you execute a line of code, it runs immediately, allowing you to track in real time how your neural network models are built. Its excellent imperative architecture and fast, lean approach have increased overall PyTorch adoption in the community.

Highly extensible
PyTorch is deeply integrated with C++ code, and it shares some of its C++ backend with the deep learning framework Torch. Users can thus program in C/C++ using an extension API based on cFFI for Python, compiled for CPU or GPU operation. This has extended PyTorch to new and experimental use cases, making it a preferable choice for research.

Python approach
PyTorch is a native Python package by design. Its functionality is built as Python classes, so all its code can seamlessly integrate with Python packages and modules. Similar to NumPy, this Python-based library enables GPU-accelerated tensor computations and provides rich APIs for neural network applications. PyTorch provides a complete end-to-end research framework that comes with the most common building blocks for carrying out everyday deep learning research. It allows chaining of high-level neural network modules because it supports a Keras-like API in its torch.nn package.
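As a rough illustration of that Keras-like chaining in torch.nn (a sketch under current PyTorch conventions, not code from the article), a small model and a single imperative training step might look like this:

```python
import torch
import torch.nn as nn

# Chain high-level modules, Keras-style, with nn.Sequential.
model = nn.Sequential(
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One imperative training step on a fake mini-batch: every line runs eagerly,
# so any intermediate tensor can be printed or inspected while debugging.
images = torch.randn(32, 28 * 28)
labels = torch.randint(0, 10, (32,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```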
PyTorch 1.0: The path from research to production
We have been discussing all the strengths PyTorch offers, and how they make it a go-to library for research work. However, one of its biggest downsides has been its poor production support, and this is expected to change soon. PyTorch 1.0 is expected to be a major release that will overcome the challenges developers face in production. This new iteration of the framework will merge Python-based PyTorch with Caffe2, allowing machine learning developers and deep learning researchers to move from research to production in a hassle-free way, without the need to deal with migration challenges. Version 1.0 will unify research and production capabilities in one framework, providing the flexibility and performance optimization required for both. It promises to handle the tasks one has to deal with while running deep learning models efficiently at massive scale. Along with production support, PyTorch 1.0 will bring further usability and optimization improvements. With PyTorch 1.0, your existing code will continue to work as-is; there won't be any changes to the existing API. If you want to stay updated with the progress of the PyTorch library, you can visit the Pull Requests page. The beta release of this long-awaited version is expected later this year. Major vendors like Microsoft and Amazon are expected to provide complete support for the framework across their cloud products.

Summing up, PyTorch is a compelling player in the field of deep learning and artificial intelligence libraries, exploiting its unique niche of being a research-first library. It overcomes the challenges involved and provides the necessary performance to get the job done. If you're a mathematician, researcher, or student who is inclined to learn how deep learning is performed, PyTorch is an excellent choice as your first deep learning framework.

Read more:
Can a production-ready Pytorch 1.0 give TensorFlow a tough time?
A new geometric deep learning extension library for Pytorch releases!
Top 5 tools for reinforcement learning


5 key reinforcement learning principles explained by AI expert, Hadelin de Ponteves

Packt Editorial Staff
10 Dec 2019
10 min read
When people refer to artificial intelligence, some think of machine learning, while others think of deep learning or reinforcement learning. Artificial intelligence is a broad term that includes machine learning; reinforcement learning is a type of machine learning, and thereby a branch of AI. In this article we will understand five key reinforcement learning principles with some simple examples. Reinforcement learning allows machines and software agents to automatically determine the ideal behavior within a specific context in order to maximize performance. It is employed by various software and machines to find the best possible behavior or path to take in a specific situation.

This article is an excerpt from the book AI Crash Course written by Hadelin de Ponteves. In this book Hadelin helps you understand what you really need to build AI systems with reinforcement learning. The book involves descriptive and practical projects to put ideas into action and show how to build intelligent software step by step.

While reinforcement learning is in some ways a form of AI, much of machine learning does not include the process of taking action and interacting with an environment the way we humans do. Indeed, as intelligent human beings, what we constantly do is the following: we observe some input, whether it's what we see with our eyes, what we hear with our ears, or what we remember in our memory; these inputs are then processed in our brain; and eventually, we make decisions and take actions. This process of interacting with an environment is what we are trying to reproduce in terms of artificial intelligence. To that extent, the branch of AI that works on this is reinforcement learning. It is the closest match to the way we think; the most advanced form of artificial intelligence, if we see AI as the science that tries to mimic (or surpass) human intelligence. Reinforcement learning also has some of the most impressive results in business applications of AI. For example, Alibaba leveraged reinforcement learning to increase its ROI in online advertising by 240% without increasing its advertising budget.

Five reinforcement learning principles
Let's begin building the first pillars of your intuition into how reinforcement learning works. These are the fundamental reinforcement learning principles, which will get you started with the right, solid basics in AI. Here are the five principles:
Principle #1: The input and output system
Principle #2: The reward
Principle #3: The AI environment
Principle #4: The Markov decision process
Principle #5: Training and inference

Principle #1 – The input and output system
The first step is to understand that today, all AI models are based on the common principle of input and output. Every single form of artificial intelligence, including machine learning models, chatbots, recommender systems, robots, and of course reinforcement learning models, takes something as input and returns something else as output.

Figure 1: The input and output system

In reinforcement learning, this input and output have specific names: the input is called the state, or input state; the output is the action performed by the AI; and in the middle we have nothing other than a function that takes a state as input and returns an action as output. That function is called a policy. Remember the name "policy," because you will often see it in AI literature. As an example, consider a self-driving car.
Try to imagine what the input and output would be in that case. The input would be what the embedded computer vision system sees, and the output would be the next move of the car: accelerate, slow down, turn left, turn right, or brake. Note that the output at any time (t) could very well be several actions performed at the same time. For instance, the self-driving car can accelerate while at the same time turning left. In the same way, the input at each time (t) can be composed of several elements: mainly the image observed by the computer vision system, but also some parameters of the car such as the current speed, the amount of gas remaining in the tank, and so on.

That's the very first important principle in artificial intelligence: it is an intelligent system (a policy) that takes some elements as input, does its magic in the middle, and returns some actions to perform as output. Remember that the inputs are also called the states.

Principle #2 – The reward
Every AI has its performance measured by a reward system. There's nothing confusing about this; the reward is simply a metric that tells the AI how well it does over time. The simplest example is a binary reward: 0 or 1. Imagine an AI that has to guess an outcome. If the guess is right, the reward will be 1, and if the guess is wrong, the reward will be 0. This could very well be the reward system defined for an AI; it really can be as simple as that!

A reward doesn't have to be binary, however. It can be continuous. Consider the famous game of Breakout:

Figure 2: The Breakout game

Imagine an AI playing this game. Try to work out what the reward would be in that case. It could simply be the score; more precisely, the score would be the accumulated reward over time in one game, and the rewards could be defined as the derivative of that score. This is one of the many ways we could define a reward system for that game. Different AIs will have different reward structures; we will build five reward systems for five different real-world applications in this book. With that in mind, remember this as well: the ultimate goal of the AI will always be to maximize the accumulated reward over time.

Those are the first two basic, but fundamental, principles of artificial intelligence as it exists today: the input and output system, and the reward.

Principle #3 – The AI environment
The third reinforcement learning principle involves an "AI environment." It is a very simple framework where you define three things at each time (t):
The input (the state)
The output (the action)
The reward (the performance metric)
For each and every AI based on reinforcement learning that is built today, we always define an environment composed of the preceding elements. It is, however, important to understand that there are more than these three elements in a given AI environment. For example, if you are building an AI to beat a car racing game, the environment will also contain the map and the gameplay of that game. Or, in the example of a self-driving car, the environment will also contain all the roads along which the AI is driving and the objects that surround those roads. But what you will always find in common when building any AI are the three elements of state, action, and reward.
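Here is a deliberately tiny, hypothetical environment (an illustrative sketch, not code from the book) that pins down exactly those three elements, using the binary-reward guessing example from Principle #2:

```python
import random

class GuessEnv:
    """A toy environment: the agent must discover a hidden number (0 or 1)."""

    def __init__(self):
        self.hidden = random.randint(0, 1)   # fixed for this environment instance

    def reset(self):
        return 0                             # the state (just a dummy observation here)

    def step(self, action):
        reward = 1 if action == self.hidden else 0   # the binary reward described above
        next_state = 0
        return next_state, reward

env = GuessEnv()
state = env.reset()
next_state, reward = env.step(action=1)
print(reward)   # 1 if the guess matched the hidden number, otherwise 0
```

Real environments differ only in scale: the state, the action, and the reward become richer, but the interface stays the same.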
Principle #4 – The Markov decision process
The Markov decision process, or MDP, is simply a process that models how the AI interacts with the environment over time. The process starts at t = 0, and then, at each next iteration, meaning at t = 1, t = 2, … t = n units of time (where the unit can be anything, for example, 1 second), the AI follows the same format of transition:
The AI observes the current state, s_t
The AI performs the action, a_t
The AI receives the reward, r_t = R(s_t, a_t)
The AI enters the following state, s_{t+1}
The goal of the AI is always the same in reinforcement learning: to maximize the accumulated rewards over time, that is, the sum of all the r_t = R(s_t, a_t) received at each transition.

The following graphic will help you visualize and remember an MDP, the basis of reinforcement learning models:

Figure 3: The Markov decision process

Now four essential pillars are already shaping your intuition of AI. Adding a last important one completes the foundation of your understanding of AI. The last principle is training and inference; in training, the AI learns, and in inference, it predicts.

Principle #5 – Training and inference
The final principle you must understand is the difference between training and inference. When building an AI, there is a time for the training mode, and a separate time for the inference mode. I'll explain what that means, starting with the training mode.

Training mode
You now understand, from the first three principles, that the very first step of building an AI is to build an environment in which the input states, the output actions, and a system of rewards are clearly defined. From the fourth principle, you also understand that inside this environment an AI will be built that interacts with it, trying to maximize the total reward accumulated over time. To put it simply, there will be a preliminary (and long) period during which the AI is trained to do that. That period is called training; we can also say that the AI is in training mode. During that time, the AI tries to accomplish a certain goal repeatedly until it succeeds. After each attempt, the parameters of the AI model are modified in order to do better at the next attempt.

Inference mode
Inference mode simply comes after your AI is fully trained and ready to perform well. It consists of interacting with the environment by performing the actions to accomplish the goal the AI was trained to achieve in training mode. In inference mode, no parameters are modified at the end of each episode. For example, imagine you have an AI company that builds customized AI solutions for businesses, and one of your clients has asked you to build an AI to optimize the flows in a smart grid. First, you'd enter an R&D phase during which you would train your AI to optimize these flows (training mode), and as soon as you reached a good level of performance, you'd deliver your AI to your client and go into production. Your AI would regulate the flows in the smart grid only by observing the current states of the grid and performing the actions it has been trained to perform. That's inference mode. Sometimes the environment is subject to change, in which case you must alternate quickly between training and inference modes so that your AI can adapt to the new changes in the environment. An even better solution is to train your AI model every day and go into inference mode with the most recently trained model. That was the last fundamental principle common to every AI.
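A minimal sketch of the MDP loop and the training/inference split, reusing the toy GuessEnv environment sketched under Principle #3 (the running-average update with occasional exploration is chosen for brevity here, not taken from the book):

```python
import random

# GuessEnv is the toy environment sketched under Principle #3 above.

def train(env, episodes=1000, epsilon=0.1):
    """Training mode: act, observe the reward, and adjust the model after each attempt."""
    value = {0: 0.0, 1: 0.0}                       # one running estimate per action
    for _ in range(episodes):
        env.reset()                                # s_t (a dummy observation here)
        if random.random() < epsilon:              # explore occasionally...
            action = random.choice([0, 1])
        else:                                      # ...otherwise exploit what was learned
            action = max(value, key=value.get)
        _, reward = env.step(action)               # r_t = R(s_t, a_t)
        value[action] += 0.1 * (reward - value[action])   # nudge the estimate
    return value

def infer(env, value):
    """Inference mode: no parameters change; just act on what was learned."""
    env.reset()
    action = max(value, key=value.get)
    return env.step(action)

env = GuessEnv()
learned = train(env)
print(learned, infer(env, learned))
```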
To summarize, we explored the five key reinforcement learning principles, which involve the input and output system, a reward system, the AI environment, the Markov decision process, and training and inference modes for AI. Get the guide AI Crash Course by Hadelin de Ponteves today to learn about programming AI software in Python without any math or data science background. It will also help you master the key skills of deep learning, reinforcement learning, and deep reinforcement learning.

Read more:
How artificial intelligence and machine learning can help us tackle the climate change emergency
DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games
OpenAI's AI robot hand learns to solve a Rubik Cube using Reinforcement learning and Automatic Domain Randomization (ADR)
DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games


Do you write Python Code or Pythonic Code?

Aaron Lazar
08 Aug 2018
5 min read
If you're new to programming, and Python in particular, you might have heard the term Pythonic brought up at tech conferences, meetups and even at your own office. You might have also wondered why the term exists and whether people are just talking about writing Python code. Here we're going to understand what the term Pythonic means and why you should be interested in learning how to write not just Python code, but Pythonic code.

What does Pythonic mean?
When people talk about pythonic code, they mean that the code uses Python idioms well, that it's natural or displays fluency in the language. In other words, it follows the idioms most widely adopted by the Python community. If someone said you are writing un-pythonic code, they might actually mean that you are attempting to write Java/C++ code in Python, disregarding the Python idioms and performing a rough transcription rather than an idiomatic translation from the other language. Okay, now that you have a theoretical idea of what Pythonic (and unpythonic) means, let's have a look at some Pythonic code in practice.

Writing Pythonic Code
Before we get into some examples, you might be wondering if there's a defined way of writing Pythonic code. Well, there is, and it's called PEP 8. It's the official style guide for Python.

Example #1
x = [1, 2, 3, 4, 5, 6]
result = []
for idx in range(len(x)):
    result.append(x[idx] * 2)
result
Output: [2, 4, 6, 8, 10, 12]
Consider the above code, where you're trying to multiply the elements of "x" by 2. What we did here was create an empty list to store the results, then append the result of each computation to that list. The result now contains every element of x multiplied by 2. Now, if you were to write the same code in a Pythonic way, you might want to simply use a list comprehension. Here's how:
x = [1, 2, 3, 4, 5, 6]
[(element * 2) for element in x]
Output: [2, 4, 6, 8, 10, 12]
You might have noticed, we skipped the entire for loop!

Example #2
Let's make the previous example a bit more complex, and place a condition that the elements should be multiplied by 2 only if they are even.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
result = []
for idx in range(len(x)):
    if x[idx] % 2 == 0:
        result.append(x[idx] * 2)
    else:
        result.append(x[idx])
result
Output: [1, 4, 3, 8, 5, 12, 7, 16, 9, 20]
We've used an if-else statement to solve this problem, but there is a simpler, more Pythonic way of doing things:
[(element * 2 if element % 2 == 0 else element) for element in x]
Output: [1, 4, 3, 8, 5, 12, 7, 16, 9, 20]
What we've done here, apart from skipping multiple lines of code, is use the if-else expression within the same line. Now, if you wanted to perform filtering, you could do this:
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[element * 2 for element in x if element % 2 == 0]
Output: [4, 8, 12, 16, 20]
What we've done here is put the if statement after the for declaration, and voila! We've achieved filtering. If you're using a nice IDE like Jupyter Notebook or PyCharm, it will help you format your code as per the PEP 8 suggestions.
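Beyond comprehensions, a few other idioms in the same spirit are worth knowing (a short illustrative sketch added here, not part of the original examples):

```python
x = [10, 20, 30]
names = ["a", "b", "c"]

# Instead of for idx in range(len(x)), iterate (and count) directly:
for idx, value in enumerate(x):
    print(idx, value)

# Instead of indexing two lists in parallel, zip them together:
for name, value in zip(names, x):
    print(name, value)

# Instead of pairing open() with a manual close(), let a context manager clean up:
with open("data.txt", "w") as handle:
    handle.write("pythonic\n")
```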
Why should you write Pythonic code?
Well, firstly, you're saving loads of time not writing humongous piles of cowdung code, so you're obviously becoming a smarter and more productive programmer. Python is a pretty slow language, and when you write Python the way you would write another language like Java or C++, you only make things worse. With idiomatic, Pythonic code, you're improving the speed of your programs. Moreover, idiomatic code is far easier for other developers working on the same code to comprehend and understand. It helps a great deal when you're trying to refactor someone else's code.

Fearing Pythonic idioms
Well, I don't mean the idioms themselves are scary. Rather, quite a few developers and organisations have begun discriminating on the basis of whether someone can or cannot write Pythonic code. This is wrong because, at the end of the day, though PEP 8 exists, the idea of the term Pythonic is different for different people. To some it might mean picking up a new style guide and improving the way you code. To others, it might mean being succinct and not repeating themselves. It's time we stopped judging people on whether they can or can't write Pythonic code and instead appreciated when someone is able to present readable, easily maintainable and succinct code. If you find them writing a bit of clumsy code, you can choose to talk to them about improving their design considerations. And the world will be a better place!

If you're interested in learning how to write more succinct and concise Python code, check out these resources:
Learning Python Design Patterns - Second Edition
Python Design Patterns [Video]
Python Tips, Tricks and Techniques [Video]


What are generative adversarial networks (GANs) and how do they work? [Video]

Richard Gall
11 Sep 2018
3 min read
Generative adversarial networks, or GANs, are a powerful type of neural network used for unsupervised machine learning. Made up of two models that run in competition with one another, GANs are able to capture and copy variations within a dataset. They're great for image manipulation and generation, but they can also be deployed for tasks like understanding risk and recovery in healthcare and pharmacology. GANs are actually pretty new: they were first introduced by Ian Goodfellow in 2014. Goodfellow developed them to tackle some of the issues with similar neural networks, including the Boltzmann machine and autoencoders, which rely on Markov chain sampling with a pretty high computational cost. GANs avoid that cost, and this efficiency gives engineers significant gains - which you need if you're working at the cutting edge of artificial intelligence.

How do Generative Adversarial Networks work?
Let's start with a simple analogy. You have a painting - say the Mona Lisa - and we have a master forger who wants to create a duplicate painting. The forger does this by learning how the original painter - Leonardo Da Vinci - produced the painting. Meanwhile, you have an investigator trying to capture the forger and 'second guess' the rules the forger is learning. To map this onto the architecture of a GAN, the forger is the generator network, which learns the distribution of classes, while the investigator is the discriminator network, which learns the boundaries between those classes - the formal 'shape' of the dataset.

Applications of GANs
Generative adversarial networks are used for a number of different applications. One of the best examples is a Google Brain project back in 2016, where researchers used GANs to develop a method of encryption. This project used 3 neural networks - Alice, Bob, and Eve. Alice's job was to send an encrypted message to Bob. Bob's job was to decode that message, while Eve's job was to intercept it. To begin with, Alice's messages were easily intercepted by Eve. However, thanks to Eve's adversarial work, Alice began to develop its own encryption strategy - it took 15,000 runs for Alice to successfully encrypt a message that Bob could decipher but Eve couldn't intercept. Elsewhere, GANs are also being used in fields such as drug research. The neural networks can be trained on existing drugs and suggest new synthetic chemical structures that improve on drugs that already exist.

Generative adversarial networks: the cutting edge of artificial intelligence
As we've seen, GANs offer some really exciting opportunities in artificial intelligence. There are two key advantages you need to remember: GANs solve the problem of generating data when you don't have enough to begin with, and they require no human supervision. This is crucial when you think about the cutting edge of artificial intelligence, both in terms of the efficiency of running the models and the real-world data we want to use - which could be poor quality or have privacy and confidentiality issues, as much healthcare data does.
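To make the forger/investigator analogy concrete, here is a heavily simplified, hypothetical sketch in PyTorch (an illustration on toy two-dimensional data, not production GAN code) of one adversarial update:

```python
import torch
import torch.nn as nn

# The "forger": turns random noise into a fake two-dimensional sample.
generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# The "investigator": outputs the probability that a sample is real rather than forged.
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real = torch.randn(64, 2) + 3.0                  # stand-in "real" dataset
fake = generator(torch.randn(64, 16))            # forgeries made from random noise

# Discriminator step: learn to tell real from fake.
d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) \
       + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: learn to fool the discriminator into calling the fakes real.
g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```

The two optimizers pull in opposite directions: the discriminator is rewarded for telling real from fake, and the generator for making that distinction impossible.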


How will AI impact job roles in Cybersecurity

Melisha Dsouza
25 Sep 2018
7 min read
"If you want a job for the next few years, work in technology. If you want a job for life, work in cybersecurity." -Aaron Levie, chief executive of cloud storage vendor Box The field of cybersecurity will soon face some dire, but somewhat conflicting, views on the availability of qualified cybersecurity professionals over the next four or five years. Global Information Security Workforce Study from the Center for Cyber Safety and Education, predicts a shortfall of 1.8 million cybersecurity workers by 2022. The cybersecurity workforce gap will hit 1.8 million by 2022. On the flipside,  Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures highlight that there will be 3.5 million cybersecurity job openings by 2021. Cybercrime will feature more than triple the number of job openings over the next 5 years. Living in the midst of a digital revolution caused by AI- we can safely say that AI will be the solution to the dilemma of “what will become of human jobs in cybersecurity?”. Tech enthusiasts believe that we will see a new generation of robots that can work alongside humans and complement or, maybe replace, them in ways not envisioned previously. AI will not make jobs easier to accomplish, but also bring about new job roles for the masses. Let’s find out how. Will AI destroy or create jobs in Cybersecurity? AI-driven systems have started to replace humans in numerous industries. However, that doesn’t appear to be the case in cybersecurity. While automation can sometimes reduce operational errors and make it easier to scale tasks, using AI to spot cyberattacks isn’t completely practical because such systems yield a large number of false positives. It lacks the contextual awareness which can lead to attacks being wrongly identified or missed completely. As anyone who’s ever tried to automate something knows, automated machines aren’t great at dealing with exceptions that fall outside of the parameters to which they have been programmed. Eventually, human expertise is needed to analyze potential risks or breaches and make critical decisions. It’s also worth noting that completely relying on artificial intelligence to manage security only leads to more vulnerabilities - attacks could, for example, exploit the machine element in automation. Automation can support cybersecurity professionals - but shouldn’t replace them Supported by the right tools, humans can do more. They can focus on critical tasks where an automated machine or algorithm is inappropriate. In the context of cybersecurity, artificial intelligence can do much of the 'legwork' at scale in processing and analyzing data, to help inform human decision making. Ultimately, this isn’t a zero-sum game -  humans and AI can work hand in hand to great effects. AI2 Take, for instance, the project led by the experts at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Lab. AI2 (Artificial Intelligence + Analyst Intuition) is a system that combines the capabilities of AI with the intelligence of human analysts to create an adaptive cybersecurity solution that improves over time. The system uses the PatternEx machine learning platform, and combs through data looking for meaningful, predefined patterns. For instance, a sudden spike in postback events on a webpage might indicate an attempt at staging a SQL injection attack. The top results are then presented to a human analyst, who will separate any false positives and flags legitimate threats. 
New cybersecurity job roles and the evolution of the job market
The bottom line of this discussion is that AI will not destroy cybersecurity jobs, but it will drastically change them. The primary focus of many cybersecurity jobs today is going through the hundreds of security tools available and determining which tools and techniques are most appropriate for the organization's needs. Of course, as systems move to the cloud, many of these decisions will already be made, because cloud providers will offer built-in security solutions. This means that the number of companies needing a full staff of cybersecurity experts will be drastically reduced. Instead, companies will need more individuals who understand issues like the potential business impact and risk of different projects and architectural decisions. This demands a very different set of skills and knowledge compared to the typical current cybersecurity role - it is less directly technical and will require more integration with other key business decision makers. AI can provide assistance, but it can't offer easy answers.

Humans and AI working together
Companies concerned with cybersecurity legal compliance and effective real-world solutions should note that cybersecurity and information technology professionals are best suited for tasks such as risk analysis, policy formulation and cyber attack response. Human intervention can help AI systems learn and evolve. Take the example of the Spain-based antivirus company Panda Security, which once had a number of people reverse-engineering malicious code and writing signatures. Today, to keep pace with overflowing amounts of data, the company would need hundreds of thousands of engineers to deal with malicious code manually. Enter AI, and only a small team of engineers is required to look at more than 200,000 new malware samples per day.

Is AI going to steal cybersecurity engineers' jobs?
So what about the former employees who used to perform this job? Have they been laid off? The answer is a straight no! But they will need to upgrade their skill set. In the world of cybersecurity, AI is going to create new jobs, as it throws up new problems to be analyzed and solved. It's going to create what are being called "new collar" jobs - something that IBM's hiring strategy has already taken into account. Once graduates enter the IBM workforce, AI enters the equation to help them get a fast start. Even junior analysts can investigate a new malware strain infecting employees' mobile phones: AI quickly researches the new malware, identifies the characteristics reported by others and provides a recommended course of action. This relieves analysts of the manual work of going through reams of data and lines of code - in theory, it should make their job more interesting and more fun.
Artificial intelligence and the human workforce, then, aren't in conflict when it comes to cybersecurity. Instead, they can complement each other to create new job opportunities that will test the skills of the upcoming generation and lead experienced professionals in new and maybe more interesting directions. It will be interesting to see how the cybersecurity workforce makes use of AI in the future.

Read more:
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
15 millions jobs in Britain at stake with AI robots set to replace humans at workforce
5 ways artificial intelligence is upgrading software engineering


What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help

Guest Contributor
24 Aug 2019
8 min read
Artificial intelligence is a hot topic for many industries. When it comes to sales, the situation gets complicated. According to the latest Salesforce State of Sales report, just 21% of organizations use AI in sales today, while its adoption in sales is expected to grow 155% by 2020. Let's explore what keeps sales teams from implementing AI and how to overcome these challenges to unlock new opportunities.

Why do so few teams adopt AI in sales?
There are a few reasons behind such a low rate of AI adoption in sales. First, some teams don't feel they are prepared to integrate AI into their existing strategies. Second, AI technologies are often applied in a hectic way: many businesses have high expectations of AI and concentrate mostly on its benefits rather than contemplating possible difficulties upfront. Such an approach rarely results in positive business transformation. Here are some common challenges that businesses need to overcome to turn their sales AI projects into success stories.

Businesses don't know how to apply AI in their workflow
Problem: Different industries call for different uses of AI. Still, companies tend to buy AI platforms and use them for the same few popular tasks, like predictions based on historical data or automatic data logging. In reality, the business type and direction should dictate what AI solution will best fit the needs of an organization. For example, in e-commerce, AI can serve dynamic product recommendations on the basis of the customer's previous purchases or views. Teams relying on email marketing can use AI to serve personalized email content as well as optimize send times.
Solution: Let the sales team participate in AI onboarding. Prior to setup, gain insight into your sales reps' daily routine, needs, and pains. Then, get their feedback continuously during the actual AI implementation. Such a strategy will ensure the sales team benefits from a tailored, rather than a generic, AI system.

AI requires data businesses don't have
Problem: AI is most efficient when fed with huge amounts of data. It's true that a company with a few hundred leads per week will train AI to make better predictions than a company with the same number of leads per month. Frequently, companies assume they don't have that much data, or that they cannot present it in a suitable format to train an AI algorithm.
Solution: In reality, AI can be trained with incomplete and imperfect data. Instead of trying to integrate the whole set of data prior to implementing AI, it's possible to start with data subsets, like historical purchase data or promotional campaign analytics. Plus, AI can improve the quality of data by predicting missing elements or identifying possible errors.

Businesses lack the skills to manage AI platforms
Problem: AI is a sophisticated algorithm that requires special skills to implement and use. Thus, sales teams need to be augmented with specialized knowledge in data management, software optimization, and integration. Otherwise, AI tools can be used incorrectly and provide little value.
Solution: There are two ways of solving this problem. First, it's possible to create a new team of big data, machine learning, and analytics experts to run the AI implementation and coordinate it with the sales team. This option is rather time-consuming. Second, it's possible to buy an AI-driven platform, like Salesforce, that includes both out-of-the-box features and plenty of customization opportunities.
Instead of hiring new specialists to manage the platform, you can reach out to Salesforce consultants who will help you select the best-fit plan, then configure and implement it. If your requirements go beyond the features available by default, it's possible to add custom functionality.

How AI can change the sales of tomorrow
When you have a clear vision of the AI implementation challenges and understand how to overcome them, it's time to make use of the benefits AI provides. A core benefit of any AI system is its ability to analyze large amounts of data across multiple platforms and then connect the dots, i.e. draw actionable conclusions. To illustrate these AI opportunities, let's take Salesforce, one of the most popular solutions in this domain today, and see how its AI technology, Einstein, can enhance a sales workflow.

Time-saving and productivity boost
Administrative work eats up sales reps' time that they could spend selling. That's why many administrative tasks should be automated. Salesforce Einstein can save time usually wasted on manual data entry by:
Automating contact creation and updates
Logging activity
Generating lead status reports
Syncing emails and calendars
Scheduling meetings

Efficient lead management
When it comes to leads, sales reps tend to base their lead management strategies on gut feeling. In spite of its importance, intuition cannot be the only means of assessing leads; the approach should be more holistic. AI has unmatched abilities to analyze large amounts of information from different sources to help score and prioritize leads. In combination with sales reps' intuition, such data can bring lead management to a new level. For example, Einstein AI can help with:
Scoring leads based on historical data and performance metrics of the best customers
Classifying opportunities in terms of their readiness to convert
Tracking reengaged opportunities and nurturing them

Predictive forecasting
AI is well known for its predictive capabilities, which help sales teams make smarter decisions without running endless what-if scenarios. AI forecasting builds sales models using historical data. Such models anticipate possible outcomes of multiple scenarios common in sales reps' work. Salesforce Einstein, for example, can give the following predictions:
Prospects most likely to convert
Deals most likely to close
Prospects or deals to target
New leads
Opportunities to upsell or cross-sell
The same algorithm can be used for forecasting sales team performance during a specified period of time and taking proactive steps based on those predictions. What's more, sales intelligence is shifting from predictive to prescriptive, where prescriptive AI does not recommend but prescribes exact actions to be taken by sales reps to achieve a particular outcome.
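As a rough illustration of what "scoring leads based on historical data" can mean in practice (a generic scikit-learn sketch with made-up numbers, not Salesforce Einstein's actual model):

```python
from sklearn.linear_model import LogisticRegression

# Historical leads: [emails opened, demo requested (0/1), company size in hundreds] - made up.
history   = [[1, 0, 2], [8, 1, 50], [3, 0, 5], [12, 1, 120], [2, 0, 3], [9, 1, 80]]
converted = [0, 1, 0, 1, 0, 1]                     # which of those leads actually closed

model = LogisticRegression().fit(history, converted)

# Score today's open leads and work the most promising ones first.
new_leads = [[10, 1, 60], [1, 0, 4]]
scores = model.predict_proba(new_leads)[:, 1]      # estimated probability of converting
for score, lead in sorted(zip(scores, new_leads), reverse=True):
    print(round(float(score), 2), lead)
```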
Watching out for pitfalls of AI in sales
While AI promises to fulfil sales reps' advanced requests, there are still some fears and doubts around it. First of all, as a rising technology, AI still carries ethical issues related to its safe and legitimate use in the workplace, such as the integrity of autonomous AI-driven decisions and the legitimate origin of the data fed to algorithms. While a full-fledged legal framework is yet to be worked out, governments have already stepped in. For example, the High-Level Expert Group on AI of the European Commission came up with the Ethics Guidelines for Trustworthy Artificial Intelligence, covering every aspect from human oversight and technical robustness to data privacy and non-discrimination. In particular, non-discrimination relates to potential bias, such as algorithmic bias that comes from human bias when sourcing data, and bias where correlation does not equal causation. Thus, AI-driven analysis should be incorporated into decision-making cautiously, as just one of many sources of insight. AI won't replace a human mind; the data still needs to be processed critically.

When it comes to sales, another common concern is that AI will take sales reps' jobs. Yes, some tasks that are deemed monotonous and time-consuming are indeed being taken over by AI automation. However, this is actually a blessing, as AI does not replace jobs but augments them. This way, sales reps have more time on their hands to complete more creative and critical tasks. It's true, however, that employers will need people who know how to work with AI technologies. That means either ongoing training or new hires, which can be rather costly. The stakes are high, though. To keep up with the fast-changing world, one has to bargain one's way to success, finding a way around current limitations and challenges.

In a nutshell
AI is key to boosting sales team performance. However, successful AI integration into sales and marketing strategies requires teams to overcome the challenges posed by sophisticated AI technologies. Popular AI-driven platforms like Salesforce help sales reps get hold of the AI potential and enjoy vast opportunities for saving time and increasing productivity.

Author Bio
Valerie Nechay is MarTech and CX Observer at Iflexion, a Denver-based custom software development provider. Using her writing powers, she translates complex technologies into fascinating topics and shares them with the world. Now her focus is on Salesforce implementation how-tos, challenges, insights, and shortcuts, as well as broader applications of enterprise tech for business development.

Read more:
IBM halt sales of Watson AI tool for drug discovery amid tepid growth: STAT report
Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library
How to create sales analysis app in Qlik Sense using DAR method [Tutorial]

Is YouTube's AI Algorithm evil?

Amarabha Banerjee
30 Sep 2018
6 min read
YouTube has been at the center of content creation, content distribution, and advertising activity for some time now. Its impact can be estimated from the 1.8 billion YouTube users worldwide. While the YouTube video hosting concept has been a great success story for content creators, the video viewing and recommendation model has been in the middle of a brewing controversy lately.

The Controversy
Logan Paul was already a top-rated YouTube star when he stumbled across a hanging dead body in a Japanese forest that is well known as a suicide spot. After the initial shock and awe, Logan Paul seemed quite amused and commented "Dude, his hands are purple," then turned to his friends and giggled. "You ever stand next to a dead guy?". This particular instance was a shocking moment for YouTubers across the globe. Disapproving reactions poured in, and the video was taken down by YouTube 24 hours later. In those 24 hours, the video managed to garner 6 million views. Even after the furious backlash, users complained that they were still seeing recommendations for Logan Paul's videos. That brought the emphasis back to the recommendation system that YouTube uses.

YouTube Video Recommendation
Back in 2005, when YouTube first started out, it had a uniform homepage for all users. This meant that every YouTube user would see the same homepage, and the creators featured there would get a huge boost in their viewership. Their selection was based on subscriber count, views and user engagement metrics, e.g. likes, comments, shares etc. This inspired other users to become creators and start contributing content to become a part of the YouTube family. In 2006, YouTube was bought by Google. Its policies and homepage started evolving gradually. As ads started showing on YouTube videos, the scenario changed quite quickly. Also, with the rapid rise in the number of users, Google thought it a good idea to curate the homepage according to each user's watch history, subscriptions, and likes. This was a good move in principle, since it helped users see what they wanted to see. As the next level of innovation, a machine learning model was created to suggest or recommend videos to users. The goal of this deep neural network based recommendation engine was to increase the watch time of every video so that users stay longer on the platform.

What did it change and how?
When YouTube's machine learning algorithm shows a few videos in your feed as "Recommended for you", it predicts what you want to see from your watch history and the watch history of similar users. If you interact with any of these videos and watch one for a certain amount of time, the recommendation engine considers it a success and starts curating a list based on your interactions with its suggested videos. The more data it gathers about your choices and watch history, the more confident it becomes in its own video decisions. The major goal of YouTube's recommendation engine is to attract your attention and get you hooked on the platform to gain more watch time. More watch time means more revenue and more scope for targeted ads. What this changes is the fundamental concept of choice and the exercise of user discretion. The moment the YouTube algorithm treats watch time as the most important metric for recommending videos to you, less importance goes to the organic interactions on YouTube, which include liking, commenting on and subscribing to videos and channels.
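As a toy illustration of how the choice of objective reshapes a ranking (made-up numbers, not YouTube's actual system):

```python
# Toy candidate videos with two signals the text contrasts: engagement vs. predicted watch time.
candidates = [
    {"title": "10-minute tutorial",  "likes_per_view": 0.09, "predicted_watch_minutes": 6},
    {"title": "Outrage compilation", "likes_per_view": 0.02, "predicted_watch_minutes": 23},
    {"title": "Balanced explainer",  "likes_per_view": 0.07, "predicted_watch_minutes": 9},
]

by_engagement = sorted(candidates, key=lambda v: v["likes_per_view"], reverse=True)
by_watch_time = sorted(candidates, key=lambda v: v["predicted_watch_minutes"], reverse=True)

print([v["title"] for v in by_engagement])   # ranking if organic interactions mattered most
print([v["title"] for v in by_watch_time])   # ranking once watch time becomes the objective
```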
Users get to see video recommendations based on the YouTube algorithm's understanding of them and its goal of maximizing watch time, with less importance given to user choices.

Distorted Reality and YouTube
This attention-maximizing model is the fundamental working mechanism of almost all social media networks. But YouTube has not been implicated in accusations of distorting reality and spreading fake news as much as Facebook has been in the mainstream media. Times are changing, though, and so are the viewpoints related to YouTube's influence on the global population and its ability to manipulate important public opinion. Guillaume Chaslot, a 36-year-old French computer programmer with a Ph.D. in artificial intelligence, was one of the engineers on the core team that developed and perfected the YouTube algorithm. In his own words: "YouTube is something that looks like reality, but it is distorted to make you spend more time online. The recommendation algorithm is not optimizing for what is truthful, or balanced, or healthy for democracy." Chaslot explains that the algorithm never stays the same: it is constantly changing the weight it gives to different signals - the viewing patterns of a user, for example, or the length of time a video is watched before someone clicks away. Chaslot was fired by Google in 2013 over performance issues. His claim was that he wanted to change the approach of the YouTube algorithm to make it more aligned with democratic values instead of being devoted to just increasing watch time.

Where are we headed?
I am not qualified or righteous enough to answer the direct question: is YouTube good or bad? YouTube creates opportunities for millions of creators worldwide to showcase their talent and present it to a global audience without worrying about countries or boundaries. That in itself is a huge power for an internet application. But the crucial point to remember here is whether YouTube is using this power just to keep users glued to the screen. Do they really care if you are seeing divisive content or prejudiced flat-earther conspiracies as recommended videos? The algorithm could be tweaked to include parameters which would remove unintended bias, such as whether a video is propagating fake news or influencing voters' minds in an unlawful way. But that is near impossible, as machines lack morality, empathy and even common sense. To incorporate humane values such as honesty and morality into an AI system is like creating an AI that is more human than machine. This is why machine-augmented human intelligence will play a more and more crucial role in the near future. The possibilities are endless, be they good or bad. Whether we progress or digress might not be in our hands anymore. But what might be in our hands is to come together and put effective checkpoints in place to identify and course-correct scenarios where algorithms run wild.

Read more:
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
Like newspapers, Google algorithms are protected by the First amendment
California replaces cash bail with algorithms


The Deep Learning Framework Showdown: TensorFlow vs CNTK

Aaron Lazar
30 Oct 2017
6 min read
The question several deep learning engineers may ask themselves is: which is better, TensorFlow or CNTK? Well, we're going to answer that question for you, taking you through a closely fought match between the two most exciting frameworks. So, here we are, ladies and gentlemen, it's fight night and it's a full house. In the Red corner, weighing in at two hundred and seventy pounds of Python and topping out at over ten thousand frames per second; managed by the American tech giant, Google; we have the mighty, the beefy, TensorFlow! In the Blue corner, weighing in at two hundred and thirty pounds of C++ muscle, we have one of the top toolkits that can comfortably scale beyond a single machine. Managed by none other than Microsoft, it's fast, it's furious, it's CNTK, aka the Microsoft Cognitive Toolkit!

And we're into Round One…
TensorFlow and CNTK are looking quite menacingly at each other and are raring to take down their opponents. TensorFlow seems pleased that its compile times are considerably faster than those of its predecessor, Theano. Although, it looks like happiness came a tad bit soon. CNTK, light and bouncy on its feet, comes straight out of nowhere with a whopping seventy-thousand-frames-per-second upper cut, knocking TensorFlow to the floor. TensorFlow looks like it's in no mood to give up anytime soon. It makes itself so simple to use and understand that even students can pick it up and start training their own models. This isn't the case with CNTK, which has yet to shed its complexity. On the other hand, CNTK seems to be thrashing TensorFlow in terms of 3D convolution, where CNTK can clearly recognize images from streaming content. TensorFlow also tries its best to run LSTM RNNs, but in vain. The crowd keeps cheering on… Wait a minute... are they calling out for TensorFlow? Yes they are! There's hardly any cheering for CNTK. This is embarrassing! Looks like its community support can't match up to TensorFlow's. And ladies and gentlemen, that does make a difference - we can see TensorFlow improving on several fronts and gradually getting back into the game! TensorFlow huffs and puffs as it tries to prove that it's not just about deep learning and that it has tools in its pocket that can support other algorithms such as reinforcement learning. It conveniently whips out TensorBoard and drops CNTK to the floor with its beautiful visualizations. TensorFlow now has the upper hand and tries hard to pin CNTK to the floor, using its R support to finish it off. But CNTK tactfully breaks loose and leaves TensorFlow on the floor - still not ready to be used in production. And there goes the bell for Round One!

Both fighters look exhausted, but you can see a faint twinkle in TensorFlow's eye, primarily because it survived Round One. Google seems to be working hard to prep it for Round Two, making several improvements in terms of speed and flexibility and, above all, getting it ready for production. Meanwhile, Microsoft boosts CNTK's spirits with a shot of Python APIs in its blood. As it moves towards version 2.0, there are a lot of improvements to CNTK; Microsoft has ensured that it's not left behind, for example by providing a backend for Keras, which puts it on par with TensorFlow. Moreover, there seem to be quite a few experimental features that it looks ready to enter the ring with, like the Java API, for example. It's the final round and boy, are these two in a serious stare-down! The referee waves them in and off they go. CNTK needs to get back at TensorFlow.
Comfortably supporting multiple GPUs and CPUs out of the box, across both Windows and Linux, it has an advantage over TensorFlow. Is it going to use that trump card? Yes it is! A thousand GPUs and a hundred machines in, and CNTK is raining blows on TensorFlow. TensorFlow clearly drops the ball when it comes to multiple machines, and it rather complicates things. It's high time that TensorFlow turned the tables. Lo and behold! It shows off its mobile deep learning capabilities with TensorFlow Lite, clearly flipping CNTK flat on its back. This is revolutionary and a tremendous breakthrough for TensorFlow! CNTK, however, is clearly the people's choice when it comes to language compatibility. With support for C++, Python, C#/.NET and now Java, it's clearly winning in this area. Round Two is coming to an end, ladies and gentlemen, and it's a neck-and-neck battle out there. We're not sure the judges are going to be able to choose a clear winner, from the looks of it. And… there goes the bell!

While the scores are being tallied, we go over to the teams and some spectators for some gossip on the what's what of deep learning. Did you know that multiple machine support is a huge advantage? It can increase speed and efficiency by almost 10 times! That's something! We also got to know that TensorFlow is training hard and is picking up positives from its rival, CNTK. There are also rumors about a new kid called MXNet (read about it here), which has APIs in R, Python and even Julia! This makes it one helluva framework in terms of flexibility and speed. In fact, AWS is already implementing it while Apple is also rumored to be using it. Clearly, something to watch out for.

And finally, the judges have made their decision. Ladies and gentlemen, after two rounds of sheer entertainment, we have the results...

                                    TensorFlow   CNTK
Processing speed                    0            1
Learning curve                      1            0
Production readiness                0            1
Community support                   1            0
CPU, GPU computation support        0            1
Mobile deep learning                1            0
Multiple language compatibility     0            1

It's a unanimous decision and just as we thought, CNTK is the heavyweight champion! CNTK clearly beat TensorFlow in terms of performance, because of its flexibility, speed and readiness for use in production! As a Deep Learning engineer, should you want to use one of these frameworks in your tasks, you should check out their features thoroughly, test them out with a test dataset and then apply them to your actual data. After all, it's the choices we make that define a win or a loss - simplicity over resource utilisation, or speed over platform; we must choose our tools wisely. For more information on the kind of tests that both the tools have been put through, read the research paper presented by Shaohuai Shi, Qiang Wang, Pengfei Xu and Xiaowen Chu from the Department of Computer Science, Hong Kong Baptist University, and these benchmarks.
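If you want to referee a rematch on your own machine, one low-effort way is through Keras, which both frameworks can serve as a backend for. The snippet below is a minimal sketch assuming the multi-backend Keras 2.x releases of that era (the layer sizes and optimizer are arbitrary choices for illustration, not taken from the benchmarks above):

```python
import os
# Must be set before Keras is imported; switch to "tensorflow" for the other corner.
os.environ["KERAS_BACKEND"] = "cntk"

from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

# The same tiny feed-forward model runs unchanged on either backend.
model = Sequential([
    Dense(64, activation="relu", input_shape=(100,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
print("Keras is running on:", K.backend())
```

Timing the same `model.fit()` call under each backend, on your own data and hardware, is a far better tiebreaker than any scorecard.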

What is AIOps and why is it going to be important?

Aaron Lazar
19 Apr 2018
4 min read
Woah, woah, woah! Wait a minute! First there was that game Spec Ops that I usually sucked at, then there came ITOps and DevOps that took the world by storm, and now there's another something-Ops?? Well, believe it or not, there is, and they're calling it AIOps.

What does AIOps stand for?

AIOps basically means Artificial Intelligence for IT Operations. It means IT operations are enhanced by using analytics and machine learning to analyze the data that's collected from various IT operations tools and devices, which helps in spotting and reacting to issues in real time. Coined by Gartner, the term has grown in popularity over the past year. Gartner believes that AIOps will be a major transformation for ITOps professionals, mainly due to the fact that traditional IT operations cannot cope with the modern digital transformation.

Why is AIOps important?

With the massive and rapid shift towards cloud adoption, automation and continuous improvement, AIOps is here to take care of the new entrants into the digital ecosystem - machine agents, artificial intelligence, IoT devices, etc. These new entrants are impossible for humans to service and maintain, and with billions of devices connected together, the only way forward is to employ algorithms that tackle known problems. Some of the solutions it provides are maintaining high availability and monitoring performance, event correlation and analysis, automation, and IT service management.

How does AIOps work?

As depicted in Gartner's diagram, there are two primary components to AIOps: Big Data and Machine Learning. Data is gathered from the enterprise. You then implement a comprehensive analytics and machine learning strategy alongside the combined IT data (monitoring data + job logs + tickets + incident logs). The processed data yields continuous insights, continuous improvements and fixes. AIOps bridges three different IT disciplines to accomplish its goals: service management, performance management, and automation. To put it simply, it is a strategic focus. It argues for a new approach in a world where big data and machine learning have changed everything.

How to move from ITOps to AIOps

Machine Learning: Most of AIOps will involve supervised learning, and professionals will need a good understanding of the underlying algorithms. Now don't get me wrong, they don't need to be full-blown data scientists to build the system, but they do need sufficient knowledge to be able to train the system to pick up anomalies. Auditing these systems to ensure they're performing the tasks as per the initial vision is necessary, and this will go hand in hand with scripting them.

Understanding modern application technologies: With the rise of Agile software development and other modern methodologies, AIOps professionals are expected to know all about microservices, APIs, CI/CD, containers, etc. With the giant leaps that cloud development is taking, they are also expected to gain visibility into cloud deployments, with an emphasis on cost and performance.

Security: Security is critical. For example, it's important for personnel to understand how to respond to a denial of service attack or a ransomware attack, like the ones we've seen in the recent past. Training machines to detect or predict such events is pertinent to AIOps.

The key tools in AIOps

There are a wide variety of AIOps platforms available in the market that bring AI and intelligence to IT operations. One of the most noteworthy ones is Splunk, which has recently incorporated AI for intelligence-driven operations.
Another one is the Moogsoft AIOps platform, which is quite similar to Splunk. BMC has also entered the fray, launching TrueSight 11, their AIOps platform that promises to address use cases to improve performance and capacity management, the service desk, and application development disciplines. Gartner has a handy list of top platforms; if you're planning the transition from ITOps, do check out the list. Companies like Frankfurt Cargo Services and Revtrak have already added the AI to their Ops.

So, are you going to make the transition? According to Gartner, 40% of large enterprises will have made the transition to AIOps by 2022. If you're one of them, I recommend you do it for the right reasons, and don't do it overnight. The transition needs to be gradual and well planned. The first thing you need to do is get your enterprise data together. If you don't have sufficient data that's worthy of analysis, AIOps isn't going to help you much. Read more: Bridging the gap between data science and DevOps with DataOps.
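To make the "train the system to pick up anomalies" idea from the Machine Learning section above a little more concrete, here is a minimal, illustrative sketch using scikit-learn's IsolationForest on made-up response-time and error-rate metrics. It is nowhere near a production AIOps pipeline - platforms like the ones listed above correlate many more data sources - but it shows the shape of the problem:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up monitoring data: average response time (ms) and error rate per minute.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(200, 20, 500),      # ~200 ms response times
    rng.normal(0.01, 0.005, 500),  # ~1% error rate
])
incident = np.array([[900, 0.12], [850, 0.20]])  # two obvious outages

# Fit on historical "normal" behaviour, then score new observations.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

for sample in np.vstack([normal[:2], incident]):
    label = detector.predict(sample.reshape(1, -1))[0]  # 1 = normal, -1 = anomaly
    print(sample, "-> ALERT" if label == -1 else "-> ok")
```

In a real deployment, the `ALERT` branch would feed the event correlation and auto-remediation layers rather than a print statement.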

5 examples of Artificial Intelligence in Web apps

Sugandha Lahoti
20 Aug 2018
7 min read
Modern-day web app development is increasingly focused on building a customer-facing front-end presence with the use of Artificial Intelligence. Web apps use Artificial Intelligence not just for intelligent automation, but also for building recommendation engines, website implementation, and image recognition, among other application areas. In this post, we look at five key areas, illustrated by real-world examples, where web apps are employing Artificial Intelligence to automate some part of their system.

Recommendation engines of Amazon and Netflix

Curating content based on the user's context is one of the most widely used AI features in web apps. Amazon, for instance, uses item-based collaborative filtering for product classification. Amazon's recommendation system uses a combination of goods-based recommendation (users are recommended goods similar to what they liked in the past) and buddy-based recommendation (users are recommended things which their Facebook friends like). Not just for their recommendation system, Amazon has been using AI for multiple tasks. Their AI management strategy is called The Flywheel, where one part of Amazon acts as a catalyst for AI and machine learning growth in other areas. Read more: Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics

Another popular example is Netflix, who revamped their recommendation algorithm based on visual impressions. One of their research projects indicated that the artwork was not only the biggest influencer in a viewer's decision to watch content, but that it also drew over 82% of their focus while browsing Netflix. This led them to develop a new image recommendation algorithm which works in real time to project the image it thinks the user will respond to. They use implicit data (user behavior) and explicit data (user activity) and then feed this data to machine learning algorithms to figure out the relevant content for each user. For each title, users get the image with the highest rank based on their profile. Alongside this, Netflix continues collecting data from its 100 million other subscribers to improve its engine's performance. Read more: What software stack does Netflix use?

Google and Microsoft using image recognition

Image recognition can serve multiple uses for web apps, including object and pattern recognition, locating duplicates (exact or partial), image search by fragments, and more. Two such unique applications of image recognition are Google's Quickdraw and Microsoft's Captionbot.ai. Quick Draw is Google's AI-powered web app game, where users have to draw an everyday object that a neural network tries to recognize. Players are given 20 seconds to draw a random item, and Google's neural network tries to match it against 50 million hand-drawn sketches by other players to identify the correct one. Quickdraw aims to generate the world's largest doodling data set, which is shared publicly to help further machine learning research. The data preserves user privacy by collecting only anonymous metadata, including timestamp, country code, whether or not the drawing was recognized, and which word the drawing corresponded to. This dataset was used in SketchRNN, a neural network that can draw words and interpolate between drawings.

Another image recognition web app is Microsoft's Captionbot.ai. The system can automatically generate a caption for an uploaded photograph. Users can rate how accurately it has detected what was on display.
The algorithm learns from the rating to make the captions more accurate. It uses three separate services to process the images: the Computer Vision API identifies the components of the photo, then mixes it with data from the Bing Image API, and runs any faces it spots through the Emotion API. The Emotion API analyses facial expressions to detect anger, contempt, disgust, fear, and other traits. Based on the results from these APIs, the caption is generated.

Google Docs powered by Natural Language Processing

Modern web apps can also be fueled with cognitive capabilities to make them stand apart from other apps. Instances of this include transforming human speech to text or conversing with people in natural language. One such example of a web app which includes natural language processing is Google Docs. Google Docs and Slides have an Explore feature to show text, images, and other features relevant to the document that a user is working on at any given point. Docs can also use natural language to search through data and reports, and automatically generate formulas in Sheets. Google Docs recently incorporated an AI grammar checker, announced at Google Cloud Next. It uses a machine translation algorithm to recognize errors and suggest corrections as users type. Google Docs can also be integrated with the Natural Language API to recognize the sentiment of selected text in a Google Doc and highlight it based on that sentiment.

Web-based artificial intelligence chatbots

Web-based chatbots are just like app-based chatbots, albeit they interact with users in the website browser. They use AI techniques such as natural language understanding and pattern recognition to store and distinguish between the context of the information provided and elicit a suitable response for future replies. Examples of web-based chatbots are the live chat bots, where the conversation with a visitor on a website is automated using a chatbot. Many live chat software companies are already experimenting with chatbots. Examples include the Operator bot used by Intercom, a company building a customer messaging platform, or Driftbot by Drift, which gives your website a personal assistant. Read More: Top 4 chatbot development frameworks for developers

Other examples are AI-based chatbots which help in creating full websites. Right Click is a startup that introduced an AI-powered chatbot which uses a conversational interface to create websites. It asks general questions during the conversation like "What industry do you belong to?" and "Why do you want to make a website?" and creates customized templates as per the given answers. Similarly, Wix's Artificial Intelligence Design bot can tailor websites by learning about each person's or business' own needs.

Web-based code helpers using AI

Intelligent coding assistants are gaining popularity with their ability to understand the code and provide the right suggestions at the right time. They can analyze code on the web and give fast and smart completions. Codota for Chrome is a smart web-based IDE which can build predictive models of code and suggest code completions and related content based on the current context present in the code. It combines program analysis, natural language processing, and machine learning to learn from the code. Users can look for Codota's icon on every code snippet in their browsers - in GitHub, StackOverflow and others. Another example is Deep Cognition's Deep Learning Studio – Cloud.
It is not exactly an IDE, but it features an AI-powered drag-and-drop interface to help design deep learning models with ease. It offers assisted modeling for automated tensor size calculations and real-time validation, and it also has an AutoML feature to automatically build a neural network.

Even though AI is a great choice to enhance your web apps, an important facet to keep in mind is ensuring the fairness, accuracy, and transparency of your web apps. For instance, web apps powered by natural language should not discriminate against people based on caste, color, or creed, or hurt user sentiments. Similarly, those using neural networks for recognizing images should ensure the filtering of obscene images. Creating such artificial intelligence systems requires a hybrid team of designers, programmers, ML engineers, and researchers. This collective group will have a good grasp of user experience, will be comfortable thinking in abstracts and algorithms, and will be equally well versed in the social impacts of artificial intelligence.

Read More: 20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017
Uber introduces Fusion.js, a plugin-based web development framework for high-performance apps.
Electron Fiddle: A 'code playground' for experimenting with cross-platform native apps.
Warp: Rust's new web framework for implementing WAI (Web Application Interface)
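To bring the first of these five areas - recommendation engines - down to earth, here is a toy item-based collaborative filtering sketch. It is nothing like Amazon's or Netflix's production systems; the ratings matrix and item names are invented purely to illustrate the core idea of recommending items similar to those a user already liked:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Invented user-item ratings matrix (rows = users, columns = items, 0 = not rated).
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 4],
    [1, 0, 4, 5, 3],
])
items = ["Laptop", "Mouse", "Novel", "Cookbook", "Notebook"]

# Item-item similarity computed from the rating columns.
item_similarity = cosine_similarity(ratings.T)

def recommend(user_index, top_n=2):
    """Score unrated items by their similarity to the items the user already rated."""
    user_ratings = ratings[user_index]
    scores = item_similarity.dot(user_ratings)
    scores[user_ratings > 0] = -np.inf  # don't recommend what they already have
    best = np.argsort(scores)[::-1][:top_n]
    return [items[i] for i in best]

print(recommend(0))  # items favoured by users with similar taste to user 0
```

Real engines add implicit signals, time decay, and heavy offline/online infrastructure on top of this basic similarity idea.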

5 Ways Artificial Intelligence is Transforming the Gaming Industry

Amey Varangaonkar
01 Dec 2017
7 min read
Imagine yourself playing a strategy game, like Age of Empires perhaps. You are in a world that looks real and you are pitted against the computer, and your mission is to protect your empire and defeat the computer, at the same time. What if you could create an army of soldiers who could explore the map and attack the enemies on their own, based on just a simple command you give them? And what if your soldiers could have real, unscripted conversations with you as their commander-in-chief to seek instructions? And what if the game’s scenes change spontaneously based on your decisions and interactions with the game elements, like a movie? Sounds too good to be true? It’s not far-fetched at all - thanks to the rise of Artificial Intelligence! The gaming industry today is a market worth over a hundred billion dollars. The Global Games Market Report says that about 2.2 billion gamers across the world are expected to generate an incredible $108.9 billion in game revenue by the end of 2017. As such, gaming industry giants are seeking newer and more innovative ways to attract more customers and expand their brands. While terms like Virtual Reality, Augmented Reality and Mixed Reality come to mind immediately as the future of games, the rise of Artificial Intelligence is an equally important stepping stone in making games smarter and more interactive, and as close to reality as possible. In this article, we look at the 5 ways AI is revolutionizing the gaming industry, in a big way! Making games smarter While scripting is still commonly used for control of NPCs (Non-playable character) in many games today, many heuristic algorithms and game AIs are also being incorporated for controlling these NPCs. Not just that, the characters also learn from the actions taken by the player and modify their behaviour accordingly. This concept can be seen implemented in Nintendogs, a real-time pet simulation video game by Nintendo. The ultimate aim of the game creators in the future will be to design robust systems within games that understand speech, noise and other sounds within the game and tweak the game scenario accordingly. This will also require modern AI techniques such as pattern recognition and reinforcement learning, where the characters within the games will self-learn from their own actions and evolve accordingly. The game industry has identified this and some have started implementing these ideas - games like F.E.A.R and The Sims are a testament to this. Although the adoption of popular AI techniques in gaming is still quite limited, their possible applications in the near-future has the entire gaming industry buzzing. Making games more realistic This is one area where the game industry has grown leaps and bounds over the last 10 years. There have been incredible advancements in 3D visualization techniques, physics-based simulations and more recently, inclusion of Virtual Reality and Augmented Reality in games. These tools have empowered game developers to create interactive, visually appealing games which one could never imagine a decade ago. Meanwhile, gamers have evolved too. They don’t just want good graphics anymore; they want games to resemble reality. This is a massive challenge for game developers, and AI is playing a huge role in addressing this need. Imagine a game which can interpret and respond to your in-game actions, anticipate your next move and act accordingly. 
Not the usual scripts where an action X will give a response Y, but an AI program that chooses the best possible alternative to your action in real-time, making the game more realistic and enjoyable for you. Improving the overall gaming experience Let’s take a real-world example here. If you’ve played EA Sports’ FIFA 17, you may be well-versed with their Ultimate Team mode. For the uninitiated, it’s more of a fantasy draft, where you can pick one of the five player choices given to you for each position in your team, and the AI automatically determines the team chemistry based on your choices. The team chemistry here is important, because the higher the team chemistry, the better the chances of your team playing well. The in-game AI also makes the playing experience better by making it more interactive. Suppose you’re losing a match against an opponent - the AI reacts by boosting your team’s morale through increased fan chants, which in turn affects player performances positively. Gamers these days pay a lot of attention to detail - this not only includes the visual appearance and the high-end graphics, but also how immersive and interactive the game is, in all possible ways. Through real-time customization of scenarios, AI has the capability to play a crucial role in taking the gaming experience to the next level. Transforming developer skills The game developer community have always been innovators in adopting cutting edge technology to hone their technical skills and creativity. Reinforcement Learning, a sub-set of Machine Learning, and the algorithm behind the popular AI computer program AlphaGo, that beat the world’s best human Go player is a case in point. Even for the traditional game developers, the rising adoption of AI in games will mean a change in the way games are developed. In an interview with Gamasutra, AiGameDev.com’s Alex Champandard says something interesting: “Game design that hinges on more advanced AI techniques is slowly but surely becoming more commonplace. Developers are more willing to let go and embrace more complex systems.” It’s safe to say that the notion of Game AI is changing drastically. Concepts such as smarter function-based movements, pathfinding, inclusion of genetic algorithms and rule-based AI such as fuzzy logic are being increasingly incorporated in games, although not at a very large scale. There are some implementation challenges currently as to how academic AI techniques can be brought more into games, but with time these AI algorithms and techniques are expected to embed more seamlessly with traditional game development skills. As such, in addition to knowledge of traditional game development tools and techniques, game developers will now have to also skill up on these AI techniques to make smarter, more realistic and more interactive games. Making smarter mobile games The rise of the mobile game industry today is evident from the fact that close to 50% of the game revenue in 2017 will come from mobile games - be it smartphones or tablets. The increasingly high processing power of these devices has allowed developers to create more interactive and immersive mobile games. However, it is important to note that the processing power of the mobile games is yet to catch up to their desktop counterparts, not to mention the lack of a gaming console, which is beyond comparison at this stage. 
To tackle this issue, mobile game developers are experimenting with different machine learning and AI algorithms to impart 'smartness' to mobile games, while still adhering to the processing power limits. Compare today's mobile games to the ones from 5 years back, and you'll notice a tremendous shift in terms of the visual appearance of the games and how interactive they have become. New machine learning and deep learning frameworks and libraries are being developed to cater specifically to the mobile platform - Google's TensorFlow Lite and Facebook's Caffe2 are instances of such development. Soon, these tools will come to developers' rescue to build smarter and more interactive mobile games.

In Conclusion

Gone are the days when games were just about entertainment and passing time. The gaming industry is now one of the most profitable industries of today. As it continues to grow, the demands of the gaming community and the games themselves keep evolving. The need for realism in games is higher than ever, and AI has an important role to play in making games more interactive, immersive and intelligent. With the rate at which new AI techniques and algorithms are developing, it's an exciting time for game developers to showcase their full potential. Are you ready to start building AI for your own games? Here are some books to help you get started: Practical Game AI Programming, and Learning game AI programming with Lua.
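If you'd like a taste of the reinforcement learning idea mentioned above before diving into those books, here is a bare-bones tabular Q-learning sketch: an NPC on an invented 5-cell corridor learns from the outcome of its own moves, rather than following a fixed script. It is a teaching toy, not how any commercial game engine implements its AI:

```python
import random

# Toy world: 5 cells in a corridor, NPC starts at cell 0, the "player" stands at cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left or step right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 10 if next_state == GOAL else -1
        # Q-learning update rule.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

# After training, the greedy policy should always walk right, towards the player.
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES)])
```

The same update rule, scaled up with function approximation and far richer state representations, is what powers the more ambitious game AI research discussed above.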

15 Useful Python Libraries to make your Data Science tasks Easier

Amey Varangaonkar
12 Feb 2018
10 min read
Python has become a big hit in the Data Science community over the last five years - so much so that it is slowly taking over R, the 'lingua franca of statistics', as the preferred choice of tool for many. The recently published Stack Overflow Developer Survey 2018 suggests Python is the next big programming language, and its adoption in the industry is only going to increase. Python's rise has been staggering, but not really surprising. Its general-purpose nature, coupled with its efficiency and ease of use, makes it easier for you to build your data science solutions without any hassle. You also have a rich suite of Python libraries available at your disposal for all your Data Science-related tasks - from basic web scraping to something as complex as training deep learning models. In this article, we take a look at some of the most popular and widely used Python libraries and their application areas.

Web Scraping

Web scraping is a popular information extraction technique from the web using the HTTP protocol, with the help of a web browser. The two most commonly used tools for web scraping are, unsurprisingly, Python-based.

1. Beautiful Soup

Beautiful Soup is a popular Python library for extracting information out of HTML and XML files. It provides a unique, easy way to navigate, search and modify the parsed data, potentially saving you hours of needless work. It works with both versions of Python, i.e. 2.7 and 3.x, and is very easy to use. Check out our latest tutorial on how to scrape a web page using Beautiful Soup. Editor's Tip: If you're new to the concept of web scraping, Beautiful Soup should be your go-to library. You can learn more about how to use this library more efficiently in our book Python Web Scraping Cookbook.

2. Scrapy

Scrapy is a free, open source framework written in Python. Although developed for web scraping, it can also be used as a general web crawler and extract data using different APIs. Following the 'Don't Repeat Yourself' philosophy of frameworks such as Django, Scrapy includes a set of self-contained crawlers, with each of them following specific instructions with a specific objective. Editor's Tip: To learn how to use Scrapy for your scraping projects, our book Python Web Scraping, Second Edition is definitely worth checking out.

Scientific Computation and Data Analysis

For arguably the most common of data science tasks, Python proves to be of great worth to data scientists, providing unique libraries for data manipulation and analysis, as well as mathematical computation.

3. NumPy

NumPy is the most popular library for scientific computing in Python and is a part of the larger Python stack for scientific computation called SciPy (discussed below). Apart from its uses in linear algebra and other mathematical functions, it can also be used as a multi-dimensional container, or array, of generic data with arbitrary data types. NumPy integrates seamlessly with languages such as C/C++ and, because of its support for multiple data types, it works well with a variety of databases as well.

4. SciPy

SciPy is a Python-based framework containing open source libraries for mathematics, scientific computation and data analysis. The SciPy library is a collection of algorithms and tools for advanced mathematical computations, statistics and much more.
The SciPy stack consists of the following libraries:

NumPy - Python package for numerical computation
SciPy - one of the core packages of the SciPy stack, for signal processing, optimization and advanced statistics
matplotlib - popular Python library for data visualization
SymPy - library for symbolic mathematics and algebra
pandas - Python library for data manipulation and analysis
iPython - interactive console to run Python-based code

5. pandas

pandas is a widely used Python package providing data structures and tools for effective data manipulation and analysis. It is a popularly used tool for quantitative analysis and finds a lot of application in algorithmic trading and risk analysis. With a large community of dedicated users, pandas is regularly updated with new API changes, performance updates and bug fixes. This is one library you definitely need to work with to truly realize its power. Editor's Tip: To get a more hands-on understanding of how to effectively use pandas for data analysis, make sure you check out our highly popular title pandas Cookbook.

Machine Learning and Deep Learning

Python trumps all other languages when it comes to implementing efficient machine learning and deep learning models, simply by virtue of its diverse, effective and easy-to-use set of libraries. It is worth having a look at the experts' take on why Python is great for machine learning and Artificial Intelligence. In this section, we see some of the most popular and commonly used Python libraries for machine learning and deep learning.

6. Scikit-learn

scikit-learn is the most popular Python library for data mining, analysis and machine learning. It is built using the capabilities of NumPy, SciPy and matplotlib, and is commercially usable. You can implement a variety of machine learning techniques such as classification, regression, clustering and more using scikit-learn. It is very easy to install and has clean, slick documentation for anyone looking to get started with it. Editor's Tip: To understand how to use scikit-learn in your machine learning projects, our bestselling book Python Machine Learning, Second Edition is all you need. If you're looking to specifically master scikit-learn, Mastering Machine Learning with scikit-learn will prove to be a very useful resource. Check it out!

7. Tensorflow

Tensorflow is the popular machine learning library everyone seems to be talking about today. It is a Python-based framework for effective machine learning and deep learning using multiple CPUs or GPUs. Backed by Google, it was initially developed by the research team of Google Brain, and is the most widely used framework in the world for machine intelligence. It enjoys the support of a large community of active users and is finding widespread application for advanced machine learning across a multitude of industrial domains - from manufacturing and retail to healthcare and smart cars. If you are interested to know more about Tensorflow, you can quickly check out the tutorial here. Editor's Tip: Tensorflow being the most popular framework for machine learning and deep learning, it is one library you should definitely master. Check out the following books to skill up quickly: Machine Learning with TensorFlow 1.x, TensorFlow Machine Learning Cookbook, Deep Learning with TensorFlow, Tensorflow 1.x Deep Learning Cookbook, and Mastering Tensorflow 1.x.
8. Keras

Keras is a Python-based neural networks API which offers a simplified interface to train and deploy your deep learning models with ease. It has support for a variety of deep learning frameworks such as Tensorflow, Deeplearning4j and CNTK. Keras is very user-friendly, follows a modular approach and supports both CPU and GPU-based computations. If you want to make the deep learning process simpler and more effective, this library is definitely worth checking out! Editor's Tip: If you're looking for a resource that teaches you how to use Keras effectively, our trending book Deep Learning with Keras will be of great help to you!

9. PyTorch

One of the more recent additions to the Python deep learning family is PyTorch, a neural network modeling library with strong GPU support. Although still in a beta stage, this project is backed by bigwigs such as Facebook and Twitter. PyTorch builds on the architecture of Torch, another popular deep learning library, to enable more efficient tensor computation and the implementation of dynamic neural networks. Editor's Tip: Here is Deep Learning with PyTorch to get you started with this amazing tool.

Natural Language Processing

Natural Language Processing pertains to the design of systems that process, interpret and analyze human language, spoken or written. Python offers unique libraries for performing a variety of tasks such as working with structured and unstructured text, predictive analytics and much more.

10. NLTK

NLTK is a popular Python library for language processing. It offers easy-to-use interfaces for a variety of NLP tasks such as text classification, tokenization, text parsing, semantic reasoning and much more. It is an open source, community-driven project, and has support for both Python 2 and Python 3.

11. SpaCy

SpaCy is another library for advanced natural language processing, based on Python and Cython. It has extensive support for various deep learning libraries and frameworks such as Tensorflow and PyTorch. With SpaCy, you can build complex statistical models for NLP with relative ease. SpaCy is easy to install and use, and proves to be of great help when it comes to large-scale extraction and analysis of textual information. Editor's Tip: To know more about how these libraries are used for natural language processing, make sure you check out the book Natural Language Processing with Python Cookbook.

Data Visualization

Data visualization is a popularly used Data Science technique for visually analysing and communicating information and valuable business insights through graphs, charts, dashboards and reports. Python offers a lot of popular libraries for effective data storytelling. Some of them are listed below.

12. matplotlib

matplotlib is the most popular Python library for data visualization, allowing for enterprise-grade 2D and 3D plotting. With matplotlib, you can build different kinds of visualizations such as histograms, bar charts, scatter plots and much more with just a few lines of code. The popularity of matplotlib rivals that of R's highly acclaimed ggplot2, and deciding which library is better has been a hot topic of debate for many years now. matplotlib runs seamlessly on all Python consoles, including iPython and Jupyter notebooks, giving you all the necessary tools to create and share your data visualizations with others.
[box type="info" align="" class="" width=""]Editor's Tip: Get started with matplotlib today, with the help of Matplotlib 2.x By Example [/box] 13. Seaborn Seaborn is a Python-based data visualization library, which finds its roots in matplotlib. Apart from offering attractive and insightful data visualizations, seaborn also offers strong support for other Python libraries such as NumPy and pandas. Per the official seaborn page: “If matplotlib “tries to make easy things easy and hard things possible”, seaborn tries to make a well-defined set of hard things easy too.” 14. Bokeh Bokeh is an interactive data visualization library based on Python. It aims to provide D3.js style elegant graphics and visualizations and runs primarily on modern web browsers. Apart from the ability to create a wide variety of visualizations, Bokeh also supports large-scale interactivity and visualizations of real-time datasets. 15. Plotly Plotly is a popularly used Python library which is used across the world for making publication-quality plots and graphs. With Plotly, you can build interactive dashboards, scatter plots, histograms, candlestick charts, heat maps, and a whole host of other data visualizations with ease. With superior interactivity, deployment and publication capabilities, Plotly is used across different domains, majorly finance and geospatial industries for effective data storytelling. So there you have it! Python has an extensive suite of libraries for every data science related task, each equipped with unique features to make the task fast and hassle-free. While there are a lot more Python libraries out there, we cherry-picked these 15 libraries based on their popularity, usefulness and the value they bring to the table. Also, the extensive community support for Python means you can get help for any kind of problem you might come across while using these tools. It's time now for you to go out there and crunch some data with some of these Python powered libraries!

Understanding the role AIOps plays in the present-day IT environment

Guest Contributor
17 Dec 2019
7 min read
In most conversations surrounding cybersecurity these days, the term “digital transformation,” gets frequently thrown in the mix, especially when the discussion revolves around AIOps. If you’ve got the slightest bit of interest in any recent developments in the cybersecurity world, you might have an idea of what AIOps is. However, if you didn’t already know- AIOps refers to a multi-layered, modern technology platform that allows enterprises to maximize IT operations by integrating AI and machine learning to detect and solve cybersecurity issues as they occur. As the name suggests, AIOps makes use of essential AI technology such as machine learning for the overall improvement of an organization’s IT operations. However, today- the role that AIOps plays has shifted dramatically- which leaves a lot of room for confusion to harbor amongst cybersecurity officers since most enterprises prefer to take the more conventional route as far as AI application is concerned. To utilize the most out of AIOps, enterprises need to understand the significance of the changes in the present-day IT environment, and how those changes influence the AI’s applications. To aid readers in understanding the volatility of the relationship between AI’s applications and the IT environment it is applicable in, we’ve put together an article that dives into the differences between conventional monitoring methods and present-day enterprise needs. Moreover, we’ll also be shining a light on the importance of the adoption of AIOps in enterprises as well. How has the IT environment changed in the modern times? Before we can get into every nook and cranny of why the transition from a traditional approach to a more modern approach matters, we’d like to make one thing very clear. Just because a specific approach works for one organization in no way guarantees that it would work for you. Perhaps the greatest advice any business owner could receive is to plan according to the specific requirements of their security and IT infrastructure. The greatest shortcoming of many CISOs and CSOs is that they fail to understand the particular needs of their IT environment and rely on conventional applications of AI to maximize the overall IT experience. Speaking of traditional AIOps applications, since the number of ‘moving’ parts or components involved was significantly less in number- the involvement of AI was far less complex, and therefore much easier to monitor and control. In a more modern setting, however, with the wave of digitalization and the ever-growing reliance that enterprises have on cloud computing systems, the number of components involved has increased, which also makes understanding the web much more difficult. Bearing witness to the ever-evolving and complex nature of today’s IT environment are the results of the research conducted by Dynatrace. The results explicitly state that something as simple as a single web or mobile application transaction can involve a staggering number of 37 different components or technologies on average. Taking this into account, relying on a traditional approach to AI becomes redundant, and ineffective since the conventional approach relies on an extremely limited understanding and fails to make sense of all the information provided by an arsenal of tools and dashboards. Not only is the conventional approach to AIOps impractical within the modern IT context, but it is also extremely outdated. 
Having said that, perhaps the only approach that fits in the modern-day IT environment is a software intelligence-centric approach, which allows for fast-paced and robust solutions to present-day IT complexities.

How important is AIOps for enterprises today?

As we've already mentioned above, the present-day IT infrastructure requires a drastic change in the relationship that enterprises have had with AIOps so far. For starters, enterprises and organizations need to realize the importance of the role that AIOps plays. Unfortunately, however, there's an overarching tendency in enterprises to naively label investment in AIOps as yet another "IT expense." On the contrary, AIOps is essential for companies and organizations today, since every company is undergoing digitalization and relying more and more on modern technology. Some cybersecurity specialists might even argue that each company is slowly turning into a software company, primarily because of the rise in cloud-computing systems.

AIOps also works on improving the 'business' aspect of an enterprise, since the modern consumer looks for enterprises that offer innovative features, along with the ability to enhance user experience through an impeccable and seamless digital experience. Furthermore, in the competitive economic conditions of today, carrying out business operations in a timely manner is critical to an enterprise's longevity - which is where the integration of AI can help an organization function smoothly. It should also be pointed out that the employment of AIOps opens up new avenues for businesses to step into, since it removes the element of fear present in most business owners. The implementation of AIOps also enables an organization to make quick-paced releases since it takes IT problems out of the equation. These problems usually consist of bugs, regulation, and compliance, along with monitoring the overall IT experience being provided to consumers.

How can enterprises ensure the longevity of their reliance on AIOps?

When it comes to the integration of any new technology into an organization's routine functions, there are always questions to be asked regarding the impact of continued reliance on modern technology. To demonstrate the point we've made above, let's return to a tech we've referred to throughout the article - cloud computing. Introduced in the 1960s, cloud computing revolutionized data storage into what it is today. However, after a couple of years and some unfortunate cyberattacks launched on cloud storage networks, cybersecurity specialists found some dire problems with complete dependency on cloud computing storage. Similarly, many cybersecurity specialists and researchers wonder about the negative impacts that a dependency on AIOps could have in the future. When it comes to reassuring enterprises about the longevity of integrating AIOps, we'd like to offer the following reasons:

Unlike cloud computing, developments in AIOps are heavily rooted in real-time data fed to the algorithm by an IT team. When you strip away all the fancy IT jargon, the only identity you need to trust is that of your IT personnel.

Since AIOps relies on smart auto-remediation capabilities, business owners can see an immediate response driven by the employed algorithms.
One such way that AIOps deploys auto-remediation strategies is by sending out alerts of any possible issue - a practice that enables businesses to focus on the 'business' side of the spectrum, since they've got a trustworthy agent to rely on.

Conclusion

At the end of the article, we can only reiterate what's been said before, in a thousand different ways - it's high time that enterprises welcomed change in the form of AIOps, instead of resisting it. In the modern age of digitalization, the key differences seen in the modern-day IT landscape should be reason enough for enterprises to be on the lookout for new alternatives for securing their data, and by extension, their companies.

Author Bio: Rebecca James is an enthusiastic cybersecurity journalist, a creative team leader, and editor of PrivacyCrypts.

What is AIOps and why is it going to be important?
8 ways Artificial Intelligence can improve DevOps
Post-production activities for ensuring and enhancing IT reliability [Tutorial]

Top 10 Tools for Computer Vision

Aaron Lazar
05 Apr 2018
7 min read
The adoption of Computer Vision has been steadily picking up pace over the past decade, but there's been a spike in adoption of various computer vision tools in recent times, thanks to its implementation in fields like IoT, manufacturing, healthcare, security, etc. Computer vision tools have evolved over the years, so much so that computer vision is now also being offered as a service. Moreover, the advancements in hardware like GPUs, as well as machine learning tools and frameworks, make computer vision much more powerful in the present day. Major cloud service providers like Google, Microsoft and AWS have all joined the race towards being the developers' choice. But which tool should you choose? Today I'll take you through a list of the top tools and will help you understand which one to pick, based on your needs.

Computer Vision Tools/Libraries

OpenCV: Any post on computer vision is incomplete without a mention of OpenCV. OpenCV is a great-performing computer vision tool and it works well with C++ as well as Python. OpenCV comes prebuilt with all the necessary techniques and algorithms to perform several image and video processing tasks. It's quite easy to use, and this makes it clearly the most popular computer vision library on the planet! It is multi-platform, allowing you to build applications for Linux, Windows and Android. At the same time, it does have some drawbacks. It gets a bit slow when working through massive data sets or very large images. Moreover, on its own, it doesn't have GPU support and relies on CUDA for GPU processing.

Matlab: Matlab is a great tool for creating image processing applications and is widely used in research, the reason being that Matlab allows quick prototyping. Another interesting aspect is that Matlab code is quite concise compared to C++, making it easier to read and debug. It tackles errors before execution by proposing some ways to make the code faster. On the downside, Matlab is a paid tool. Also, it can get quite slow during execution time, if that's something that concerns you much. Matlab is not your go-to tool in an actual production environment, as it was basically built for prototyping and research.

AForge.NET/Accord.NET: You'll be excited to know that image processing is possible even if you're a C# and .NET developer, thanks to AForge/Accord. It's a great tool that has a lot of filters and is great for image manipulation and different transforms. The Image Processing Lab allows for filtering capabilities like edge detection and more. AForge is extremely simple to use, as all you need to do is adjust parameters from a user interface. Moreover, its processing speeds are quite good. However, AForge doesn't possess the power and capabilities of other tools like OpenCV, such as advanced motion picture analysis or even advanced processing of images.

TensorFlow: TensorFlow has been gaining popularity over the past couple of years, owing to its power and ease of use. It lets you bring the power of Deep Learning to computer vision and has some great tools to perform image processing and classification through its tensor-based graph API. Moreover, you can make use of the Python API to perform face and expression detection. You can also perform classification using techniques like regression. TensorFlow also allows you to perform computer vision at tremendous scale. One of the main drawbacks of TensorFlow is that it's extremely resource-hungry and can devour a GPU's capabilities in no time.
Moreover, if you wanted to learn how to perform image processing with TensorFlow, you’d have to understand what Machine and Deep Learning is, write your own algorithms and then go forward from there. CUDA: CUDA is a platform for parallel computing, invented by NVIDIA. It enables great boosts in computing performance by leveraging the power of GPUs. The CUDA Toolkit includes the NVIDIA Performance Primitives library which is a collection of signal, image, and video processing functions. If you have large images to process, that are GPU intensive, you can choose to use CUDA. CUDA is easy to program and is quite efficient and fast. On the downside, it is extremely high on power consumption and you will find yourself reformulating for memory distribution in parallel tasks. SimpleCV: SimpleCV is a framework for building computer vision applications. It gives you access to a multitude of computer vision tools on the likes of OpenCV, pygame, etc. If you don’t want to get into the depths of image processing and just want to get your work done, this is the tool to get your hands on. If you want to do some quick prototyping, SimpleCV will serve you best. Although, if your intention is to use it in heavy production environments, you cannot expect it to perform on the level of OpenCV. Moreover, the community forum is not very active and you might find yourself running into walls, especially with the installation. GPUImage: GPUImage is a framework or rather, an iOS library that allows you to apply GPU-accelerated effects and filters to images, live motion video, and movies. It is built on OpenGL ES 2.0. Running custom filters on a GPU calls for a lot of code to set up and maintain. GPUImage cuts down on all of that boilerplate and gets the job done for you. Computer Vision as a Service: Google Cloud and Mobile Vision APIs: Google Cloud Vision API enables developers to perform image processing by encapsulating powerful machine learning models in a simple REST API that can be called in an application. Also, its Optical Character Recognition (OCR) functionality enables you to detect text in your images. The Mobile Vision API lets you detect objects in photos and video, using real-time on-device vision technology. It also lets you scan and recognise barcodes and text. Amazon Rekognition: Amazon Rekognition is a deep learning-based image and video analysis service that makes adding image and video analysis to your applications, a piece of cake. The service can identify objects, text, people, scenes and activities, and it can also detect inappropriate content, apart from providing highly accurate facial analysis and facial recognition for sentiment analysis. Microsoft Azure Computer Vision API: Microsoft’s API is quite similar to its peers and allows you to analyse images, read text in them, and analyse video in near-real time. You can also flag adult content, generate thumbnails of images and recognise handwriting. Bonus: SciPy and NumPy: I thought I’d add these in as well, since I’ve seen quite a few developers use Python to build computer vision applications (without OpenCV, that is). SciPy and NumPy are quite powerful enough to perform image processing. scikit-image is a Python package that is dedicated towards image processing, which uses native NumPy and SciPy arrays as image objects. Moreover, you get to use the cool IPython interactive computing environment and you can also choose to include OpenCV if you want to do some more hardcore image processing. 
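To give you a feel for how little code a basic pipeline takes with the most popular of these tools, here is a minimal OpenCV sketch for grayscale conversion and edge detection. The filename is a placeholder - point it at any image you have on disk:

```python
import cv2

# Load an image from disk (path is a placeholder - use your own file).
image = cv2.imread("sample.jpg")
if image is None:
    raise FileNotFoundError("Could not read sample.jpg")

# Convert to grayscale, blur slightly to reduce noise, then run Canny edge detection.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("edges.jpg", edges)
print("Edge map saved to edges.jpg, size:", edges.shape)
```

The thresholds here are illustrative defaults; in practice you would tune them (or the blur kernel) to the kind of images your application actually processes.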
Well, there you have it - these were the top tools for computer vision and image processing. Head on over and check out these resources to get working with some of the top tools used in the industry.

What you need to know about Generative Adversarial Networks

Guest Contributor
19 Jan 2018
7 min read
[box type="note" align="" class="" width=""]We have come to you with another guest post by Indra den Bakker, an experienced deep learning engineer and a mentor on Udacity for many budding data scientists. Indra has also written one of our best selling titles, Python Deep Learning Cookbook which covers solutions to various problems in modeling deep neural networks.[/box] In 2014, we took a significant step in AI with the introduction of Generative Adversarial Networks -better known as GANs- by Ian Goodfellow, amongst others. The real breakthrough of GANs didn’t follow until 2016, however, the original paper includes many novel ideas that would be exploited in the years to come. Previously, deep learning had already revolutionized many industries by achieving above human performance. However, many critics argued that these deep learning models couldn’t compete with human creativity. With the introduction to GANs, Ian showed that these critics could be wrong. Figure 1: example of style transfer with deep learning The idea behind GANs is to create new examples based on a training set - for example to demonstrate the ability to create new paintings or new handwritten digits. In GANs two competing deep learning models are trained simultaneously. These networks compete against each other: one model tries to generate new realistic examples, this network is also called the generator. The other network tries to classify if an example originates from the training set or from the generator, also called as discriminator. In other words, the generator tries to mislead the discriminator by generating new examples. In the figure below we can see the general structure of GANs. Figure 2: GAN structure with X as training examples and Z as noise input. GANs are fundamentally different from other machine learning applications. The task of a GAN is unsupervised: we try to extract patterns and structure from data without additional information. Therefore, we don’t have a truth label. GANs shouldn’t be confused with autoencoder networks. With autoencoders we know what the output should be: the same as the input. But in case of GANs we try to create new examples that look like the training examples but are different. It’s a new way of teaching an agent to learn complex tasks by imitating an “expert”. If the generator is able to fool the discriminator one could argue that the agent mastered the task - think about the Turing test. Best way to explain GANs is to use images as an example. The resulting output of GANs can be fascinating. The most used dataset for GANs is the popular MNIST dataset. This dataset has been used in many deep learning papers, including the original Generative Adversarial Nets paper. Figure 3: example of MNIST training images Let’s say as input we have a bunch of handwritten digits. We want our model to be able to take these examples and create new handwritten digits. We want our model to learn how to write digits in such a way that it looks like handwritten digits. Note, that we don’t care which digits the model creates as long as it looks like one of the digits from 0 to 9. As you may suspect, there is a thin line between generating examples that are exact copies of the training set and newly created images. We need to make sure that the generator generates new images that follow the distribution of the training examples but are slightly different. This is where the creativity needs to come in. In Figure 2, we’ve showed that the generator uses noise -random values- as input. 
This noise is random, to make sure that the generator creates different output each time. Now that we know what we need and what we want to achieve, let's have a closer look at both model architectures, starting with the generator. We will feed the generator with random noise: a vector of 100 values randomly drawn between -1 and 1. Next, we stack multiple fully connected layers with the Leaky ReLU activation function. Our training images are grayscale and sized 28x28, which means that, flattened, we need 784 units in the final layer of our generator; the output of the generator should match the size of the training images. As the activation function for the final layer we use TanH, to make sure the resulting values are squeezed between -1 and 1. The final model architecture of our generator looks as follows:

Figure 4: model architecture of the generator

Next, we define our discriminator model. The most common choice is a mirrored version of the generator: 784 input values and, as the final layer, a fully connected layer with a single neuron and a sigmoid activation function for binary classification. Keep in mind that the generator and the discriminator are trained at the same time. The model looks like this:

Figure 5: model architecture of the discriminator

In general, generating new images is the harder task. Therefore, it can sometimes be beneficial to train the generator twice for each step, whereas the discriminator is trained only once. Another option is to set the learning rate for the discriminator a bit smaller than the learning rate for the generator. Tracking the performance of GANs can be tricky: a lower loss doesn't always represent better output. That's why it's a good idea to output the generated images during the training process. In the following figure we can see the digits generated by a GAN after 20 epochs.

Figure 6: example output of generated MNIST images
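As a rough translation of the two architectures above into code, the sketch below builds the generator and discriminator just described. Only the 100-value noise input, the Leaky ReLU activations, the 784-unit TanH output, and the mirrored sigmoid discriminator follow the text; the hidden-layer sizes and the 0.2 negative slope are assumptions of ours.

```python
import torch.nn as nn

# Generator: 100-value noise vector in, flattened 28x28 image out.
# Hidden-layer sizes (256, 512) are assumed; the article does not specify them.
generator = nn.Sequential(
    nn.Linear(100, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 512),
    nn.LeakyReLU(0.2),
    nn.Linear(512, 784),  # 784 = 28x28, matching the training images
    nn.Tanh(),            # squeeze outputs between -1 and 1
)

# Discriminator: a mirrored version of the generator with a single
# sigmoid output for real-versus-generated classification.
discriminator = nn.Sequential(
    nn.Linear(784, 512),
    nn.LeakyReLU(0.2),
    nn.Linear(512, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)
```

Following the training tips above, one could call the generator update twice per discriminator update, or give the discriminator a slightly smaller learning rate than the generator.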
As we stated in the introduction, GANs didn't get much traction until 2016. They were mostly unstable and hard to train: small adjustments to the model or the training parameters resulted in unsatisfying results. Advancements in model architecture and other improvements fixed some of these limitations and unlocked the real potential of GANs. An important improvement was introduced by Deep Convolutional GANs (DCGANs). DCGAN is a network architecture in which both the discriminator and the generator are fully convolutional. The output is more stable, also for datasets with higher translation invariance, such as the Fashion MNIST dataset.

Figure 7: example of Fashion MNIST images generated by a Deep Convolutional Generative Adversarial Network (DCGAN)

There is so much more to discover with GANs, and there is huge potential still to be unlocked. According to Yann LeCun, one of the fathers of deep learning, GANs are the most important advancement in machine learning in the last 20 years. GANs can be used for many different applications, ranging from 3D face generation to upscaling the resolution of images and text-to-image generation. GANs might be the stepping stone we have been waiting for to add creativity to machines.

[author title="Author's Bio"]Indra den Bakker is an experienced deep learning engineer and a mentor on Udacity. He is the founder of 23insights, part of NVIDIA's Inception program, a machine learning start-up building solutions that transform the world's most important industries. For Udacity, he mentors students pursuing a Nanodegree in deep learning and related fields, and he is also responsible for reviewing student projects. Indra has a background in computational intelligence and worked for several years as a data scientist for IPG Mediabrands and Screen6 before founding 23insights.[/author]