
  • Students: 148
  • Courses: 20
  • Reviews: 11

Courses taught by Anette Prehn

  • Setup Own Asterisk VoIP Server with Android, iOS & Win Apps
    VoIP for Dummies - Asterisk VoIP Server setup with Android, iOS, Win Apps - Using Fully Open Source Server and Clients
     
    In this course, we will set up a VoIP server and client devices so that the clients can make calls between themselves through the server. No prior experience is required.
     
    VoIP, or Voice over Internet Protocol, is a technology that lets you make phone calls between devices without using a conventional analogue phone connection. Your calls are carried over the internet, so normal phone lines are not required.
     
    VoIP allows you to make calls from a computer, from a mobile phone connected to the internet, or from a normal phone connected to a device called a VoIP adapter. The major benefit is that, since the call is placed over the internet, you don't need a separate, dedicated line to make the call; an internet connection is all you need.
     
    If you are a business owner trying to bring down communication costs at your office, if you are planning a large call-centre operation, or if you are a technical enthusiast who wants to host your own VoIP server and offer the service to your own users, then this course is exactly for you.
     
    Let me now give you a brief overview of the topics we are going to cover in this course.
     
    The first session is a theory session, in which we will cover the technology behind VoIP: the architecture and working of VoIP compared to the traditional PSTN (Public Switched Telephone Network) that has been with us from the beginning.
     
    In the next session, we will set up our own Ubuntu-based VPS on an Amazon Web Services (AWS) EC2 instance. I prefer Amazon because its plans are very flexible and it offers a 'Free Tier' that you can use for one year at little or no cost, as long as you keep your usage within the free limits.
     
    After setting up the VPS, we will install Asterisk, a very popular open source VoIP server, and we will open the specific ports that client devices need in order to communicate with each other through the server.
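To give a feel for the port side of this, Asterisk keeps its media port range in rtp.conf; a minimal sketch is below (10000-20000 is Asterisk's usual default RTP range, and the same UDP range, plus SIP's port 5060, has to be opened in the EC2 security group):

```ini
; /etc/asterisk/rtp.conf -- RTP media port range (sketch)
[general]
rtpstart=10000
rtpend=20000
```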
     
    The next session is an important one: we will configure the dial plan and the extensions we are going to use on our server. We will also configure the server to accept video calls as well as audio calls.
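A dial plan maps dialled numbers to actions. As a hedged sketch (the endpoint names 6001 and 6002 are invented for illustration), two extensions in extensions.conf that ring PJSIP endpoints might look like:

```ini
; /etc/asterisk/extensions.conf (sketch)
[internal]
exten => 6001,1,Dial(PJSIP/6001,20)   ; ring endpoint 6001, 20-second timeout
exten => 6002,1,Dial(PJSIP/6002,20)   ; ring endpoint 6002
```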
     
    For our VoIP clients we will use an application called Linphone. It is completely open source, and the advantage is that it is available for all major platforms: Windows, Linux, macOS, Android and iOS. You can download the source code and customize it to your needs.
     
    The same configuration settings you make in this softphone also apply to a physical, hard-wired IP phone: enter the same details there and it will register and work in the same way as the softphone clients.
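Whether softphone or desk phone, the account settings boil down to the same handful of SIP fields (all values below are placeholders, not settings from the course):

```ini
; typical SIP account settings (placeholder values)
username  = 6001
domain    = sip.example.com    ; your server's public address or IP
password  = secret
transport = udp                ; SIP signalling, port 5060 by default
```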
     
    In the next session, we will configure the Windows version of Linphone. We will install it on Windows and configure it to register with the server and make audio and video calls between devices.
     
    Later, we will do the same for Android: we will install Linphone from the Play Store on an Android device and apply the same configuration so that audio and video calls can be placed.
     
    After that, we will set it up on an iPhone. We will download Linphone from the App Store, install it, and then test audio and video calls from the iPhone to the other devices.
     
    Becoming an expert in VoIP technology is a rewarding career move. VoIP is already widespread, and at some point in the future we will discard analogue telephone lines entirely and rely on VoIP-based IP telephones, because a single internet channel is all that's needed rather than multiple dedicated lines.
     
    The world is evolving towards this kind of technology, and VoIP experts are very much in demand. At the end of the course we will provide you with a Course Completion Certificate, which will be a great benefit if you are pursuing a VoIP-based career.
     
    So see you soon in the classroom. Let's make our first call! Have a great day!
    (0)
    USD40
  • Practical Blockchain & Smart Contracts : Ethereum & Solidity

    Hello and welcome to The Complete Practical Guide To Blockchain Application Development

     

    If you ask me which information technology domain is going to change the future, without any doubt I would say it is blockchain-based decentralized applications and smart contracts.

     

    Put simply, a blockchain is a transaction ledger that maintains identical copies across every member computer in a network. The interesting feature is that once information is placed inside a block of the blockchain, tampering with or changing it is practically impossible.

     

    Governments and financial organizations have already invested millions of dollars into blockchain research and development and most of them have already implemented blockchain in financial services and record keeping.

     

    Blockchain-based smart contracts are also replacing conventional paper contracts and promissory deeds. A smart contract is an electronic contract that executes itself once the conditions mentioned in it are fulfilled. Since it also lives on a blockchain, once created it cannot be tampered with by anyone.

     

    If you are a technology enthusiast or a programmer who wishes to integrate blockchain into your applications, this is the right time to get a thorough knowledge of the practical implementation of blockchain.

     

    There are tons of materials and books out there explaining the concepts, the nuts and bolts, of blockchain, but only very few of them explain how it can be practically implemented.

     

    In this course, I have taken extreme care to keep roughly a 30/70 balance between theoretical concepts and practical implementation.

     

    After this course, you will have a clear idea of how and where to implement blockchain in your existing software projects as well as in your upcoming project ideas.

     

    Here is an overview of the topics included in this course.

     

    Before we proceed with the intense practical sessions, the first few sessions will discuss the history and basic concepts of blockchain, distributed applications and smart contracts.

     

    In session 1, we will discuss the history of blockchain and distributed applications, and we will look at the basic structure of a block, the building block of the blockchain, which is simply a chain of these blocks.

     

    In session 2, we will see, with the help of diagrams, how these blocks are linked to form a blockchain, and the major security measures that make blockchain so secure.

     

    In session 3, we will have a short discussion about the types of blockchain networks and the concept of cryptocurrencies like Bitcoin, which are built on blockchain technology.

     

    In session 4, we will get an overview of what is meant by blockchain-based smart contracts and how they work. We will also discuss the various fields where blockchain is applied at present and where it will be in the future.

     

    Once we have covered enough of the basics, we will jump directly into our first practical blockchain workshop, in which we will build a working model of a conceptual blockchain out of plain JavaScript and the JavaScript runtime Node.js. Don't worry if you have only basic JavaScript knowledge; I will explain things clearly enough that even a novice can understand and follow.

     

    In session 5, we will prepare our computer by installing and configuring Node.js and Visual Studio Code, which we will use to develop the JavaScript blockchain.

     

    In session 6, we will create the class for a single block, which we will use to build the blockchain.

     

    In session 7, we will create the first block in the chain, called the genesis block.

     

    In session 8, we will add the functionality to append new blocks so that we can create the rest of the chain.

     

    In session 9, we will test the block-addition mechanism of our JavaScript blockchain.

     

    In session 10, we will implement hash verification, the most important security measure inside our blockchain.
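The idea behind hash verification can be sketched in a few lines of shell, outside the course's JavaScript code: each block's hash covers its own data plus the previous block's hash, so changing any earlier block invalidates every hash after it. All block data below is invented for the demo.

```shell
# Build a two-block chain: each hash covers the block data and the previous hash.
b1_hash=$(printf '%s%s' "GENESIS" "pay alice 10" | sha256sum | cut -d' ' -f1)
b2_hash=$(printf '%s%s' "$b1_hash" "pay bob 5"  | sha256sum | cut -d' ' -f1)

# Tamper with block 1's data and recompute the chain from there:
t1_hash=$(printf '%s%s' "GENESIS" "pay alice 99" | sha256sum | cut -d' ' -f1)
t2_hash=$(printf '%s%s' "$t1_hash" "pay bob 5"  | sha256sum | cut -d' ' -f1)

# Block 2's stored hash no longer matches the recomputed one:
[ "$b2_hash" != "$t2_hash" ] && echo "tampering detected"
```

Verifying a chain is therefore just recomputing every hash and comparing it with the stored value.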

     

    In session 11, we will add a further security measure called proof of work, to prevent quick tampering attempts on our JavaScript blockchain.
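As a rough sketch of the idea (not the course's JavaScript implementation), proof of work means searching for a nonce that makes the block hash satisfy a difficulty rule, for example a fixed number of leading zeros; the block payload below is invented:

```shell
data="block-42:pay alice 10"   # invented block payload
nonce=0
# Difficulty "00": keep trying nonces until the hash starts with two zeros.
while :; do
  hash=$(printf '%s:%d' "$data" "$nonce" | sha256sum | cut -d' ' -f1)
  case "$hash" in 00*) break ;; esac
  nonce=$((nonce + 1))
done
echo "found nonce=$nonce"
```

Because the hash must be recomputed (and the nonce re-searched) for every block after a change, tampering with an old block becomes expensive in proportion to the chain's length.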

     

    In session 12, we will implement the concept of mining rewards for the miners who mine and validate new blocks in our JavaScript blockchain.

     

    That's all for our simple conceptual JavaScript blockchain. From the next session onwards, we are going to get our hands dirty with some serious business using the actual Ethereum network. In the coming sessions, we will create an Ethereum-based blockchain and deploy our smart contract to the Ethereum network. For your information, Ethereum is the world's most popular open source public blockchain platform.

     

    In session 13, we will get our systems ready to build on the Ethereum blockchain by downloading and installing the dependencies, which include Node.js, Ganache, MetaMask, the Solidity language, the Truffle framework and the Sublime Text editor for coding.

     

    In session 14, we will configure these dependencies and run a basic test to make sure everything is ready for development.

     

    We will then start building our blockchain smart-contract project with Solidity and Ethereum. The project is a simple contest application for finding the best actor, with two contestants, Tom and Jerry: a blockchain-based app that determines who gets the maximum viewer support.

     

    In session 15, we will create the model and class used for every contestant in the project, and we will add Tom and Jerry as contestants.

     

    In session 16, we will fetch the contestant list using web3.js, the library used to interact with the blockchain.

     

    Since a smart contract is created once and cannot be altered, it must be tested strictly before being deployed to the blockchain network. If there is any mistake or bug, the only option you have is to remove the existing contract and deploy a new one instead, which is very time-consuming and inconvenient. So in session 17, we will test the contract using the Truffle framework's testing environment, emulating transactions such as creating contestants.

     

    Until now, all blockchain interaction has happened through the Node.js command line and the Truffle framework command line. Now it's time to present it to our customers and online users through an attractive web-based front end. We will create the HTML part of the front-end application in session 18; in session 19 we will add the JavaScript part of the simple and beautiful HTML page, and we will list the contestants of our best-actor contest to the public by running lite-server, which comes along with the Truffle framework.

     

    In session 20, in addition to the contestant-listing functionality, we will add voting functionality to our blockchain smart contract.

     

    Since both members and non-members of the network may use the voting app, we need to thoroughly check the conditions and rules implemented in the smart contract. We will test the functionality built so far using the Truffle testing mechanism in session 21.

     

    In session 22, we will add rules and restrictions to our best-actor contest: a user can vote only once, and a user cannot vote for a non-existent contestant.

     

    Until now, voting could only be done from the command line. In the next session, we will add the ability to cast a vote for any contestant from our simple web interface.

     

     

    In the final session, we will create an event watcher that listens for the voting event; once a vote is cast, it will refresh the page and fetch the data from the blockchain network, so that the current winner of the contest is known then and there.

     

    Even though these are sample projects, they will give you real insight into how blockchain can be included in your web or mobile projects. This course will also give you enough knowledge to get ahead of others in the blockchain race, which has already started.

     

    After successful completion of the course, we will provide you with a course completion certificate, which will add much value to your career as a developer, software engineer or software architect.

     

    So let's jump into the future of technology using blockchain. See you soon in the classroom. Wishing you happy learning.

    (0)
    USD40
  • GPS Tracking - Setup own GPS Server with android & iOS Apps

    GPS Tracking for Dummies - Quick Guide to Setup your own Open Source GPS Server, Android and iOS Clients & Tracking Apps.

     

    In this course, I will take you on a journey in which you configure your own GPS server and your client devices, so that you can track the clients from your server. It doesn't matter if you have no prior experience in this field.

     

    If you are a business owner trying to track your assets or your employees, or perhaps a technical person who wishes to offer GPS tracking as a service to your customers, this course is for you.

     

    These are the topics that we are going to cover in this course.

     

    • In the first session, we will cover the concepts and theory of how the GPS system works: how the clients receive the GPS signal from the satellites and how a client's exact position is determined. In the next session we will install our Ubuntu VPS on an Amazon Web Services (AWS) EC2 instance. We will also configure the instance to open the ports through which our client devices communicate with the server and send it their coordinates.

     

    • After setting up the VPS, we will install Traccar, a completely open source, Java-based GPS server, and configure its options.

     

    • In the next session, I will install the client application on one of my Android devices and take a walk around my house, so that you can see the device's coordinates moving on the screen in real time.

     

    • In the coming sessions, we will configure the management interface for Android as well as iOS devices, and we will take a thorough look at the management interface and its various options in the web interface as well.

     

    • In the final session, I will explain the different components of a commercially available GPS module that can be fitted into your car or any commercial vehicle.

     

    I am sure that by the end of this course you will feel a pride of accomplishment when you see your client devices moving across the map on your screen.

     

    At the end of the course we will provide you with a completion certificate, which will prove very valuable for your career if you are working with location-based applications.

     

    So see you soon in the classroom. Let's start the art of tracking!

    (0)
    USD40
  • Git and GitHub Version Control - The Complete Startup Guide

    Welcome to the complete Git version control startup guide. You may be a programmer, a content writer, an article writer or a novelist; either way, you deal with a lot of content that you update periodically. The problem is that when you want to change something you have already done, you have to compare the different versions and make the changes by hand.

     

     

    And if you are a software engineer, you will work on a single project with multiple team members, perhaps in different locations, countries or continents. You will sometimes be working on the same project, even the same file, and then managing the different versions of documents becomes a big problem: most of your energy, time and effort is wasted managing the documents, comparing them and merging them into the final version.

     

    To tackle these problems, version control systems come to the rescue.

     

    A version control system tracks the changes you make to a document or a program, and keeps files in sync between team members working towards the same goal on the same project. The time, effort and energy spent keeping documents aligned, reverting to an earlier version, or collaborating on the same document shrinks dramatically, often to nearly zero.

     

    So it is very important and useful to keep your project files in a version control system: you can revert to any previously saved version at any time, and after reverting you can move forward again to the latest version of the repository.

     

    In software companies dealing with medium or large projects, the team keeps the project files inside a version control system and works within it. The most popular version control system in the world is Git.

     

    So we will cover Git's different commands and functionality, and also GitHub, the most popular hosted Git service, used by companies all around the world.

     

    Here is the list of topics that we are going to discuss in this course.

     

    First there will be an introductory session, where I explain the concepts of Git: how Git works and why we need Git in the first place.

     

     

    In the second session, we will install Git on your computer and configure the Git Bash shell.

     

    In the third session, we will initialize a Git repository and deal with commands like init, add and status.
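As a quick taste of those three commands (the repository and file names are invented):

```shell
mkdir demo-repo && cd demo-repo
git init                 # create an empty repository (a .git directory)
echo "chapter one" > book.txt
git status               # book.txt shows up as untracked
git add book.txt         # stage the file for the next commit
git status               # book.txt is now listed under "Changes to be committed"
```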

     

    In the fourth session, we will learn how to view the Git log, commit changes, and then check out a different version of the project.
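Those commands fit together like this (a self-contained sketch with invented file contents and commit messages):

```shell
mkdir log-demo && cd log-demo && git init -q
git config user.email "demo@example.com"; git config user.name "Demo"
echo "v1" > notes.txt
git add notes.txt
git commit -qm "first version"
echo "v2" > notes.txt
git commit -qam "second version"      # -a stages the modified tracked file
git log --oneline                     # lists both commits, newest first
# Check out the first commit to see the old content (detached HEAD):
git checkout -q "$(git rev-list --max-parents=0 HEAD)"
cat notes.txt                         # prints: v1
git checkout -q -                     # return to the latest version
```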

     

    In the fifth session, we will deal with branches and how to manage them inside a Git repository.
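A branch is just a movable pointer to a commit; a minimal sketch (branch and file names invented):

```shell
mkdir branch-demo && cd branch-demo && git init -q
git config user.email "demo@example.com"; git config user.name "Demo"
echo base > base.txt && git add base.txt && git commit -qm "base"
git branch feature-x          # create a branch at the current commit
git checkout feature-x        # switch to it (newer Git also has: git switch feature-x)
echo experimental > try.txt
git add try.txt && git commit -qm "branch work"
git checkout -                # back to the original branch; try.txt disappears
```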

     

    In the sixth session, we will create and configure a GitHub account. In the next session, we will push the local repository to the GitHub server, make some changes there, and then pull those changes back from GitHub into our local repository.

     

    Sometimes there will be conflicts: two people edit the same file, sometimes even the same lines, and both try to push their changes to the server. We will deal with these merge conflicts in the next session.
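A conflict like that can be reproduced entirely locally. In this hedged sketch (file names and messages invented), two branches change the same line, and the merge stops with conflict markers written into the file:

```shell
mkdir conflict-demo && cd conflict-demo && git init -q
git config user.email "demo@example.com"; git config user.name "Demo"
echo "original line" > story.txt
git add story.txt && git commit -qm "base"
git checkout -q -b feature
echo "feature version" > story.txt
git commit -qam "edit on feature"
git checkout -q -                      # back to the original branch
echo "mainline version" > story.txt
git commit -qam "edit on mainline"
git merge feature || true              # fails: both sides changed the same line
grep "<<<<<<<" story.txt               # conflict markers now bracket both versions
# To resolve: edit story.txt by hand, then: git add story.txt && git commit
```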

     

    Throughout these sessions, we will run the Git commands in the Git Bash shell. In the ninth session, we will look at Git GUI, where you don't have to type commands: you can use a graphical user interface to do the merging, pushing, pulling and synchronization with the Git server.

     

    In the final session, we will compare the different Git service providers available on the internet, along with their plans and pricing, and discuss which is the better fit.

     

    Altogether, this is a very valuable course for getting started with Git-based version control and with GitHub, the most popular Git service provider in the world.

     

    And when you join a company, they will of course ask whether you are familiar with working in a Git environment; after this course you can confidently answer "yes, I am". This adds real value to your profile.

     

    We will also provide you with a completion certificate at the end of the course, which you can add to your profile. So see you soon in the classroom. Let's get started on the Git journey. Thank you!

    (0)
    USD40
  • Complete Ethical Hacking & Penetration Testing for Web Apps

    DISCLAIMER:

    -----------------

    ANY ACTIONS AND/OR ACTIVITIES RELATED TO THE MATERIAL CONTAINED WITHIN THIS COURSE ARE SOLELY YOUR RESPONSIBILITY. MISUSE OF THE INFORMATION IN THIS COURSE CAN RESULT IN CRIMINAL CHARGES BROUGHT AGAINST THE PERSONS IN QUESTION. THE INSTRUCTOR AND THE PLATFORM WILL NOT BE HELD RESPONSIBLE IN THE EVENT THAT CRIMINAL CHARGES ARE BROUGHT AGAINST ANY INDIVIDUAL WHO MISUSES THE INFORMATION IN THIS COURSE TO BREAK THE LAW.

    Hello and welcome to Web Based Ethical Hacking and Penetration Testing for Beginners. This course is an introduction to your career as a web security expert.

     

    The internet is all around us. We have been using internet-based services for a long while, and as the internet grew, cyber-security threats started to appear alongside it. You hear stories of cyber-attacks every day in the newspapers and media.

     

    As the convenience of internet-based applications has grown, whether a web application or a mobile application backed by a cloud API, the chance of a cyber attack has grown with it. It has reached a level where we cannot even predict what happens the next day, because hackers are always alert and vigilant, looking for a loophole to get into an application and steal your information.

     

    As the saying goes, "a person who knows how to break a lock can make a good lock!": because he knows the vulnerabilities and the loopholes, he can build a secure application, or guide the developer to build one that avoids the loopholes that have already been discovered.

     

    As cyber-security professionals or enthusiasts, we will deal with the OWASP Top 10 vulnerabilities. OWASP, the Open Web Application Security Project, is a community-driven project that periodically updates its list of vulnerabilities. Within the Top 10 there is a subset of related vulnerabilities, so we will cover almost 30 of the most common vulnerabilities currently found in the cyber world.

     

    Once you get hold of these 30 vulnerabilities, you will have the confidence to test a web application, a cloud-based API, or a mobile application that uses a cloud-based API. In every session I give you the mitigations, the defensive mechanisms we can follow to avoid the vulnerability discussed in that session, so you will be able to suggest defensive measures to the developer of the application.

     

    Please make sure you use these techniques only for penetration testing and ethical hacking, and do not use them for any illegal or unethical purpose.

     

    Cyber-security and penetration testing is a very lucrative career. This course is intended for cyber-security beginners with an overview of basic web coding who want to enter the cyber-security world, for existing testers willing to move into penetration testing, and for anyone interested in ethical hacking.

     

    In this course, we concentrate mainly on penetration testing of web-based applications. It also applies to mobile applications, because most of them communicate with a cloud-based API, and the security of that API is effectively the security of the mobile application using it. At the end of the course, we will provide you with a course completion certificate on demand, which you can include in your resume to add real value to your current profile.

     

    I promise you are going to have a really thrilling experience doing penetration testing and ethical hacking. So see you soon in the classroom.

    (0)
    USD70
  • Odoo: The complete Master Class: Beginner to Professional

    As we all know, managing a business organization, whether small, medium or large, is a very time-consuming task.

     

    An ERP, or Enterprise Resource Planning system, is software that allows a business to manage all the processes running inside the organization from a single application.

     

    Before the introduction of ERP systems, separate software was used for each process: a sales management system for sales, an accounting system for accounts, an HR management system for human resources. It was a very messy environment. With ERP, everything came under a single roof, the ERP platform.

     

    Providing ERP to businesses is a multi-billion-dollar industry nowadays. Big players like Microsoft and Oracle are already in it, and small and medium-sized companies are coming up with their own ERP solutions too. There is a great market out there.

     

    And if you are still not on an ERP system, it is only a matter of time before you have to switch to one, because the business world now runs on ERP.

     

    In this course, we will learn about Odoo, the best open source ERP system available. It is a very popular open source ERP used by businesses around the world, and it has a vast community that can help you with any doubts or clarifications regarding Odoo installation, configuration, or any custom requirement your organization has.

     

    Before learning Odoo, or any ERP solution, you should learn the basics of business processes: how a sales flow works, how purchasing works, how accounting is done in an organization, and so on. This basic knowledge lets you use your ERP software efficiently.

     

    In this course, we will get an overview of the basic business concepts, and we will have thorough, in-depth sessions on creating a VPS server on Amazon Web Services and installing Odoo on it. Then we will cover the various Odoo modules for a business: Accounts, Sales, Purchase, HRMS, E-commerce, Task Management and Website Management.

     

    For advanced Odoo users and developers, we will cover how to enable developer mode in Odoo, how to take a backup of your existing Odoo database (just in case) and restore it, how to install custom Odoo modules from the marketplace and troubleshoot them, and how to customize Odoo by adding a new field to the interface and including that field in report searches.

     

    We will also cover a few report customizations. Finally, we will compare the community (open source) version of Odoo with the enterprise (premium) version, so that you get a basic overview of which modules are only available in the enterprise version.

     

    ERP administration, management and customization is a very rewarding and lucrative career, and I promise this course will be a jump starter for yours. After completing the course, you will be provided with a course completion certificate, which you can attach to your portfolio to add great value to it.

     

    So see you soon in class! Let's start this wonderful journey into the Odoo ERP system. Have a great day. Bye!

    (0)
    USD40
  • The Complete XMPP Course: Chat Server Setup Android/iOS Apps

    Beginners who are curious about the technology behind chat applications, and professionals who want to deepen their knowledge of XMPP server and client technology, are welcome.
    Entrepreneurs who wish to offer a chat server as a 'Software as a Service' business model are welcome too.

     

    • Let's start with an overview of the XMPP protocol, which is popular for chat and messaging applications.

    • Set up an Amazon Web Services VPS (an EC2 instance) running Ubuntu Linux.

    • Compare the popular chat servers and install Prosody, a lightweight, efficient open source chat server.

    • Explore the basic Prosody configuration options to get started.

    • Install a few additional modules needed for file transfer and similar features.

    • Configure an SSL certificate for our chat server to enhance safety and security.

    • Install and configure the Windows/macOS/Linux chat app Pidgin (open source).

    • Install and configure the Android chat app Conversations (open source).

    • Install and configure the iOS chat app ChatSecure (open source).
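As a taste of what the server side of the steps above looks like, here is a minimal Prosody configuration sketch (the domain and certificate paths are placeholders, not values from the course):

```lua
-- prosody.cfg.lua (sketch)
VirtualHost "chat.example.com"
    ssl = {
        key = "/etc/prosody/certs/chat.example.com.key";
        certificate = "/etc/prosody/certs/chat.example.com.crt";
    }
```

Accounts for the clients can then be created with Prosody's admin tool, e.g. `prosodyctl adduser alice@chat.example.com` (the address is a placeholder).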

     


    (0)
    USD40
  • Setup Own VPN Server with Android, iOS, Win & Linux Clients

    Hello and welcome to my Quick Setup Personal VPN Server Course!!

    • We are living in a connected world, and the fact is that almost all of the personal and official information we exchange over the internet, through platforms like social media and websites, is traceable.

       

    • Because of the sheer amount of data we exchange through the internet, our online privacy should be our topmost priority. Organizations, governments, people, internet providers... all are trying to get hold of your information, to sell it or use it for marketing and other unwanted purposes. They are trying to monitor you each and every second.

       

    • In this course, we will set up our own VPN server, which can help us make our internet traffic secure and safe. Let's have a quick overview of the contents included in this course.

       

    • The first session is a theory session in which we will have an overview of VPN technology: the working principle behind a VPN network, the applications of VPN, and the dos and don'ts while using a VPN.

       

    • In the next session, we will set up a preconfigured OpenVPN instance on the Amazon Web Services cloud platform. We will see how we can start an AWS account and a virtual server using the one-year free tier offer provided by Amazon Web Services. Then we will configure the ports and other options for our server.

       

    • In the third session, we will use an SSH Client application to connect to our VPN server. We will use the private key from AWS to access the server via command line.

       

    • In the next session we will configure the DNS server address of the OpenVPN Server. Then we will also create two test users in the VPN Server.

       

    • With our server setup complete, in the fifth session we will connect to our VPN server using a Windows PC. We will verify the connection by checking the changed IP address and location.

       

    • In the next session, we will connect to our VPN server using a Linux computer. Here also, we will verify the connection by checking the changed IP address and location.

       

    • And in the coming session, we will connect to our VPN server using a Mac computer. We will verify the connection by checking the changed IP address and location.

       

    • Then we will proceed with the leading mobile platforms. First we will connect with an Android mobile phone and verify the connection. Then we will go ahead with connecting using an iPhone and verify the connection.

       

    • In the final session, we will discuss some tips and tricks by which you can save VPN server resources, thereby keeping the server expense to a minimum.

       

    • Overall, this is a course that enables you to set up a quick VPN network. We are not going deep into the protocol-level workings of VPN; rather, this is a very practical setup of a safe and secure VPN network.

       

    • You will also be provided with an experience certificate once you finish the course.

       

    • Best wishes with setting up your own private VPN server. Let's safeguard our privacy online. See you soon in the classroom. Have a great time. Bye bye.

    (0)
    USD25
  • Docker for Dummies - The Complete Absolute Beginners Guide.
    Hello and welcome to my new course Docker for Dummies
     
    In the beginning of internet and server technology, there was the bare-metal server: a single computer that hosted a single operating system and, on top of that, a single web server application.
     
    The quest for better use of hardware led to another innovation called virtualization. It enabled a single bare-metal server to host multiple guest operating systems that work like separate computers. The technology itself was superb, but the resource and memory usage was high.
     
    In the search for more refinement and efficient use of resources, containerization technology arrived: a single operating system is divided into multiple containers of very small size, all sharing the common kernel of the host operating system itself.
     
    We are going to learn about this technology in our Docker for Dummies course.
     
    The first session is essentially a theory session. We will discuss the basics of Docker containerization, monoliths and microservices, the transition that led to containerization, and its future.
     
    Later we will see how we can install Docker on various platforms: at first Docker Desktop on Windows 10 Pro, and later Docker Toolbox on the Windows Home edition.
     
    Then we will proceed with the steps to download and install Docker Desktop on Mac computers.
     
    And finally we will see how we can install the actual Docker, the Docker Community Edition, on Ubuntu Linux. Don't worry if you don't have a Linux computer with you; we will also cover how to install VirtualBox and, on top of that, Ubuntu Linux, so that you can use your Windows or Mac computer itself.
     
    Then we will proceed with the basics of Docker: the difference between Docker images and containers, searching for and pulling an image from Docker Hub, and dealing with the downloaded images.
     
    Later we will run the images we downloaded using the run command and its various options. Containers will be created while we run the images.
     
    We will also recap the commands already learned, along with alternatives to them. We will see how to get more details about a running Docker container, manage it, stop it, and gracefully terminate it if needed. We will also discuss the various options and use-case scenarios for the docker run and docker start commands.
     
    We will then deal with how to create a Dockerfile. It contains instructions about the custom procedure for creating the Docker container we want, so that we don't have to repeat commands every time we create new containers. We will also create a few sample containers using a Dockerfile.
     
    Later we will see yet another important tool called Docker Compose. This is a very handy option when we want to deal with a multi-container application: a single YAML file takes care of all the containers and the configuration required by each and every service in the application.
     
    As a project we will create a sample web application with two microservices, one in Python and one in PHP. We will see how we can tie these together using Docker Compose and get the result.
     
    So overall this is a perfect course for beginners who want to get their feet wet with containerization technology using Docker. Almost all technology companies are moving towards containerization from their existing virtualization infrastructure, so learning this will take you far ahead of others in the race to learn the latest technology.
     
    We will also be providing you with a course completion certificate so that you can add it later to your portfolio.
     
    Let’s go ahead with this short and wonderful course. See you soon in the classroom. Have a great time. Bye
    (0)
    USD40
  • Adobe XD Mobile & Web UX/UI for Dummies: Quick Crash Course!
    Hello and welcome to my new course Adobe XD for Dummies.
     
     
     
    This tutorial is a crash course about how you can start using Adobe XD for your project prototyping.
     
     
     
    In the first session, we will see how we can download and install Adobe XD into your computer.
     
    Then in the next session, we will have a quick overview about the Adobe XD application's user interface.
     
    Then we will proceed with managing artboards in XD, which is where we create our individual screen designs. Just like other design software, XD also uses the concept of layers while prototyping.
     
    We will deal with layers in the next session.  Then as the next step, we will proceed with basic things like creating basic shapes and manipulating the properties of them.
     
    Then we will try with different text editing options.  And then we will play with colors and gradient color combinations.
     
    Then to make designs more attractive and natural, we can use the various blurring and shadow options available. We can easily duplicate elements, rotate, resize, align and do other translations using Adobe XD.
     
    It also features Boolean operations like add, subtract, etc.
     
    Then we will see how we can import assets like images and how we can apply masks to images or elements. We will also try shape editing and the pen tool to create custom shapes beyond the ones predefined in the application.
     
    Repetition is a big problem while dealing with screen designs. Adobe XD solves this by using a feature called repeat grids. Also we will see different export options available.
     
    Prototyping is the step in Adobe XD in which we link the different screens together by creating hotspots, through which the customer can interact with the prototype.
     
    We will also see how we can include plugins in the Adobe XD application to increase its capability and add additional functionality.
     
    We will then design a quick and easy mobile chat application. We will make use of sample Adobe XD documents called UI Kits, available from the Adobe website, reusing their components to create our app quickly and effectively.
     
    Our app will have a splash screen which transitions automatically to a login or register screen. We will design these screens using most of the UI Kit elements.
     
    Later we will design a chat listing screen where all chats are listed, and a conversations screen for individual chats. We will also create an overlay menu just like modern mobile apps have.
     
    Also we will link together these screens using the prototyping options available.
     
    After that we will go ahead with a simple website design. We will be designing a university website. We will first design a Home Screen. Here also we can create it quickly as we are reusing the elements from the web design UI Kit.
     
    Then we will create an About us screen to have the contents and a Contact us screen. Later we will link the screen together so that the user can interact with it.
     
     
     
    Overall this is a quick and easy crash course which enables you to learn Adobe XD in only a few hours. There will also be a course completion certificate provided at the end of this course to include in your portfolio. So be ready to create stunning prototypes and impress your clients. See you soon in the classroom.
    (0)
    USD125
  • Deep Learning & Neural Networks Python - Keras : For Dummies.
    Hi this is Abhilash Nelson and I am thrilled to introduce you to my new course Deep Learning and Neural Networks using Python: For Dummies
     
     
     
    The world has been revolving around the terms "Machine Learning" and "Deep Learning" recently. With or without our knowledge, every day we are using these technologies, ranging from Google suggestions, translations, ads and movie recommendations to friend suggestions, sales and customer experience, and so on. There are tons of other applications too. No wonder "Deep Learning" and "Machine Learning along with Data Science" are the most sought-after talents in the technology world nowadays.
     
     
     
    But the problem is that when you think about learning these technologies, there is a misconception that lots of maths, statistics, complex algorithms and formulas need to be studied first. It's just like someone trying to make you believe that you should learn the workings of an internal combustion engine before you learn how to drive a car. The fact is that, to drive a car, we only need to know how to use the user-friendly controls extending from the engine, like the clutch, brake, accelerator and steering wheel. And with a bit of experience, you can easily drive a car.
     
     
     
    Basic know-how about the internal workings of the engine is of course an added advantage while driving a car, but it's not mandatory. Just like that, in our deep learning course we keep a perfect balance between learning the basic concepts and implementing the built-in deep learning classes and functions from the Keras library using the Python programming language. These classes, functions and APIs are just like the control pedals of the car engine, which we can easily use to build an efficient deep learning model.
     
     
     
    Let's now see how this course is organized, with an overview of the list of topics included.
     
     
     
    We will start with a few theory sessions in which we will have an overview of deep learning and neural networks: the difference between deep learning and machine learning, the history of neural networks, the basic workflow of deep learning, biological and artificial neurons, and applications of neural networks.
     
     
     
    In the next session, we will try to answer the most popular, yet confusing, question: whether to choose deep learning or machine learning for an upcoming project involving artificial intelligence. We will compare the scenarios and factors which help us decide between machine learning and deep learning.
     
     
     
    Then we will prepare the computer and install the Python environment for our deep learning coding. We will install the Anaconda platform, which is the most popular Python platform, and also install the necessary dependencies to proceed with the course.
     
     
     
    Once we have our computer ready, we will learn the basics of the Python language, which could help if you are new to Python, and get familiar with the basic syntax that will help with the projects in our course. We will cover Python assignments, flow control, functions, data structures, etc.
     
     
     
    Later we will install the libraries for our projects, Theano, TensorFlow and Keras, which are the best and most popular deep learning libraries. We will try a sample program with each library to make sure it's working fine, and also learn how to switch between them.
     
     
     
    Then we will have another theory session in which we will learn the concept of multi-layer perceptrons, the basic element of a deep learning neural network, and then the terminology and the major steps associated with training a neural network. We will discuss those steps in detail in this session.
     
     
     
    After all these exhaustive basics and concepts, we will now move on to creating real-world deep learning models.
     
     
     
    At first we will download and use the Pima Indians Onset of Diabetes dataset, containing training data on Pima Indians and whether they had an onset of diabetes within five years. We will build a classification model with this, then train the model and evaluate its accuracy. We will also try manual and automatic data splitting and k-fold cross-validation with this model.
     
     
     
    The next dataset we are going to use is the Iris Flowers Classification dataset, which classifies iris flowers into 3 species based on their petal and sepal dimensions. This is a multi-class dataset, and we will build a multi-class classification model with it, then train the model and evaluate its accuracy.
     
     
     
    The next dataset is the Sonar Returns dataset, which contains data about the strength of sonar signal returns and whether each was reflected by a rock or by metal, like a mine, under the seabed. We will build the base model and evaluate the accuracy. We will also try to improve the performance of the model with data preparation techniques like standardization, and by changing the topology of the neural network, making it deeper or shallower.
     
     
     
    We will also use the Boston House Prices dataset. Unlike the previous ones, this is a regression dataset, which uses different factors to determine the average cost of owning a house in the city of Boston. For this one also we will build the model and try to improve its performance with data preparation techniques like standardization, and by changing the topology of the neural network.
     
     
     
    As we have spent our valuable time designing and training the model, we need to save it for doing predictions later. We will see how we can save the already-trained model structure to either a JSON or a YAML file, along with the weights as an HDF5 file. Then we will load it and convert it back to a live model. We will try this for all the datasets we have learned so far.
     
     
     
    Now the most awaited magic of deep learning: our genius multi-layer perceptron models will make predictions for custom input data from the knowledge they have already learned. The Pima Indian model will predict whether I will get diabetes in the future by analysing my actual health statistics. Then the next model, the Iris Flower model, will predict the correct species of a newly blossomed iris flower in my garden.
     
     
     
    Also the prediction will be done with the Sonar Returns Model to check if the data provided matches either a mine or a rock under the sea.
     
     
     
    Then with our next Multi-Layer Perceptron model, the Boston House Price model will predict the median value of the cost of housing in Boston.
     
     
     
    Large deep learning models may take days or even weeks to complete training. It's a long-running process, and there is a great chance that some interruption may occur in between, losing all our hard work until then. To prevent that, we have a feature called check-pointing. We can safely mark checkpoints, keep them safe, and load the model from that point at a later time. Check-pointing can be done on every improvement to the model during training, or for the best instance of the model during training.
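Keras provides this via its ModelCheckpoint callback; the core "save only on improvement" logic it applies can be sketched in a few lines of plain Python (the metric values below are made-up toy numbers, not real training output):

```python
# Toy sketch of the check-pointing idea: keep a "checkpoint" only when
# the monitored metric improves (values here are invented for illustration).
val_accuracies = [0.61, 0.58, 0.67, 0.67, 0.72, 0.70]

best = float("-inf")
checkpoints = []  # epochs at which we would save the model to disk
for epoch, acc in enumerate(val_accuracies):
    if acc > best:          # the "save best only" behaviour
        best = acc
        checkpoints.append(epoch)

print(checkpoints)  # epochs 0, 2 and 4 improved on the previous best
```

In the real callback, "append to checkpoints" would instead write the model weights to an HDF5 file.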
     
     
     
    At times, we may need to supervise and take a look at how the model is doing while it's being trained. We can access the model training history in Keras very easily and, if needed, visualize the progress with a graphical representation.
     
     
     
    Then we will deal with a major problem in deep learning called over-fitting, where some neurons in the network gradually gain more weight and contribute to incorrect results. We will learn how to apply the drop-out regularization technique to prevent this, in both visible and hidden layers.
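In Keras this is a one-line Dropout layer; as a toy NumPy illustration of the underlying idea (not the Keras API, and with an assumed rate of 0.5), each activation is zeroed with probability p during training and the survivors are rescaled so the expected output stays the same:

```python
import numpy as np

# "Inverted dropout" sketch: zero activations with probability p,
# scale the survivors by 1/(1-p) to preserve the expected value.
rng = np.random.default_rng(0)
p = 0.5                                    # dropout rate (assumed value)
activations = np.ones((1, 8))              # pretend layer output

mask = rng.random(activations.shape) > p   # keep roughly half the units
dropped = activations * mask / (1.0 - p)

print(dropped)  # surviving units are scaled to 2.0, the rest are 0.0
```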
     
     
     
    We can control the learning rate of a model. Just as we learn rigorously at first and slow the pace towards the end of a lesson to understand better, we will configure and evaluate both a time-based and a drop-based learning rate scheduler for our new model, the Ionosphere classification model.
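The two schedule shapes mentioned above are commonly written as simple formulas (the initial rate, decay and drop values below are assumed purely for illustration; in Keras you would plug such a function into a LearningRateScheduler callback):

```python
import math

# Time-based decay: the rate shrinks smoothly every epoch.
def time_based(lr0, decay, epoch):
    return lr0 / (1.0 + decay * epoch)

# Drop-based decay: the rate is cut by a fixed factor every few epochs.
def drop_based(lr0, drop, epochs_drop, epoch):
    return lr0 * math.pow(drop, math.floor(epoch / epochs_drop))

print(time_based(0.1, 0.01, 10))     # 0.1 / 1.1, about 0.0909
print(drop_based(0.1, 0.5, 10, 25))  # 0.1 * 0.5**2 = 0.025
```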
     
     
     
    In the sessions that follow, we will learn a powerful deep learning technique called Convolutional Neural Networks. This has proved very efficient at difficult computer vision and natural language processing tasks where a normal neural network architecture would fail.
     
     
     
    In the following sessions, at first we will have an overview of convolutional neural networks, or CNNs: how they work and their architecture. Then we will proceed with some popular and interesting experiments with convolutional neural networks.
     
     
     
    A major capability of deep learning techniques is object recognition in image data. We will build a CNN model in Keras to recognize handwritten digits, using the openly available MNIST dataset for this purpose. We will first build a multi-layer perceptron based neural network for the MNIST dataset, and later upgrade it to a convolutional neural network.
     
     
     
    And you know what... we are bold enough to do a prediction with a handwritten digit using our MNIST model. We will take time to train the model and save it, then later load it and do a quick prediction with the already saved model.
     
     
     
    We will later try improving the performance of the model by making the network larger. We will also try techniques like image augmentation, sample standardization, ZCA whitening, and transformations like random rotations, random shifts and flips to our augmented images. Finally, we will save the augmented images as a dataset for later use.
     
     
     
    Then we will go ahead with another important and challenging CNN project: object recognition in photographs. We will use another openly available dataset called CIFAR-10. We will learn about the CIFAR-10 object recognition dataset and how to load and use it in Keras. At first we will create a simple convolutional neural network for object recognition, then try to improve the performance using a deeper network. One more time we will have the guts to do a real-time prediction with the CIFAR-10 convolutional neural network, where the model will identify a cat and a dog from the images we supply to the system.
     
     
     
    Overall, this is a basic-to-advanced crash course in deep learning neural networks and convolutional neural networks using Keras and Python, which I am sure, once you complete it, will skyrocket your current career prospects, as this is the most wanted skill nowadays and, of course, the technology of the future. We will also provide you with an experience certificate after the completion of this course, as proof of your expertise, which you may attach to your portfolio.
     
     
     
    A day is coming in the near future when deep learning models will outperform human intelligence. So be ready, and let's dive into the world of thinking machines.
     
     
     
    See you soon in the classroom. Bye for now.
    (0)
    USD150
  • Complete Python Machine Learning & Data Science for Dummies.

    Hi.. Hello and welcome to my new course, Machine Learning with Python for Dummies. We will discuss an overview of the course and the contents included in it.

     

    Artificial Intelligence, Machine Learning and Deep Learning Neural Networks are the most used terms in the technology world nowadays. They are also the most misunderstood and confused terms.

     

    Artificial Intelligence is a broad spectrum of science which tries to make machines intelligent like humans. Machine learning and neural networks are two subsets that come under this vast artificial intelligence umbrella.

     

    Let’s check what machine learning is, now. As human babies, we were in our learning phase: we learned how to crawl, stand and walk, then speak words, then make simple sentences. We learned from our experiences, with many trials and errors before we learned how to walk and talk. The best trials for walking and talking, which gave positive results, were kept in our memory and used later. This process compares closely to a machine learning mechanism.

     

    Then we grew up and started thinking logically about many things, had emotional feelings, etc. We kept on thinking and found solutions to problems in our daily life. That's what the deep learning neural network scientists are trying to achieve: a thinking machine.

     

    But in this course we are focusing mainly on machine learning. Throughout this course, we are preparing our machine to make it ready for a prediction test. It’s just like how you prepare for your mathematics test in school or college. We learn and train ourselves by solving as many similar mathematical problems as possible. Let’s call these sample problems and their solutions the 'Training Input' and 'Training Output' respectively. Then the day comes when we have the actual test: we are given a new set of problems to solve, very similar to the problems we learned, and based on the previous practice and learning experience, we have to solve them. We can call those problems the 'Testing Input' and our answers the 'Predicted Output'. Later, our professor will evaluate these answers and compare them with the actual answers, which we call the 'Test Output'. Then a mark is given on the basis of the correct answers; we call this mark our 'Accuracy'. The life of a machine learning engineer and data scientist is dedicated to making this accuracy as good as possible through different techniques and evaluation measures.
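That vocabulary can be sketched in a few lines of plain Python: compare the "predicted output" against the "test output" to get an accuracy score (all values below are made-up toy data):

```python
# Toy illustration of accuracy: the fraction of predictions
# that match the known correct answers.
test_output      = ["A", "B", "A", "C", "B", "A"]
predicted_output = ["A", "B", "C", "C", "B", "B"]

correct = sum(p == t for p, t in zip(predicted_output, test_output))
accuracy = correct / len(test_output)

print(accuracy)  # 4 of 6 answers match, so about 0.667
```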

     

    Here are the major topics included in this course. We are using Python as our programming language. Python is a great tool for developing programs which perform data analysis and prediction. It has tons of classes and features which perform complex mathematical analysis and give solutions in one or two simple lines of code, so we don't have to be statistics geniuses or mathematical nerds to learn data science and machine learning. Python really makes things easy.

     

    These are the main topics that are included in our course

     

    System and Environment preparation

    -----------------------------------

    Installing Python and Required Libraries (Anaconda)

     

    Basics of python and sci-py

    ---------------------------

    Python, Numpy , Matplotlib and Pandas Quick Courses

     

    Load data set from csv / url

    -----------------------------

    Load CSV data with Python, NumPY and Pandas

     

    Summarize data with description

    --------------------------------

    Peeking data, Data Dimensions, Data Types, Statistics, Class Distribution, Attribute Correlations, Univariate Skew

     

    Summarize data with visualization

    -----------------------------------

    Univariate, Multivariate Plots

     

    Prepare data

    -------------

    Data Transforms, Rescaling, Standardizing, Normalizing and Binarization

     

    Feature selection – Automatic selection techniques

    -----------------------------------

    Univariate Selection, Recursive Feature Elimination, Principal Component Analysis and Feature Importance

     

    Machine Learning Algorithm Evaluation

    -----------------------------------

    Train and Test Sets, K-fold Cross Validation, Leave One Out Cross Validation, Repeated Random Test-Train Splits.

     

    Algorithm Evaluation Metrics

    -----------------------------

    Classification Metrics - Classification Accuracy, Logarithmic Loss, Area Under ROC Curve, Confusion Matrix, Classification Report.

    Regression Metrics - Mean Absolute Error, Mean Squared Error, R².

     

    Spot-Checking Classification Algorithms

    -----------------------------------

    Linear Algorithms -  Logistic Regression, Linear Discriminant Analysis.

    Non-Linear Algorithms - k-Nearest Neighbours, Naive Bayes, Classification and Regression Trees, Support Vector Machines.

     

    Spot-Checking Regression Algorithms

    -----------------------------------

    Linear Algorithms -   Linear Regression, Ridge Regression, LASSO Linear Regression and Elastic Net Regression.

    Non-Linear Algorithms - k-Nearest Neighbours, Classification and Regression Trees, Support Vector Machines.

     

    Choose the Best Machine Learning Model

    -----------------------------------

    Compare Logistic Regression, Linear Discriminant Analysis, k-Nearest Neighbours, Classification and Regression Trees, Naive Bayes, Support Vector Machines.

     

    Automate and Combine Workflows with Pipeline

    -----------------------------------

    Data Preparation and Modelling Pipeline

    Feature Extraction and Modelling Pipeline

     

    Performance Improvement with Ensembles

    -----------------------------------

    Voting Ensemble

    Bagging: Bagged Decision Trees, Random Forest, Extra Trees

    Boosting: AdaBoost, Gradient Boosting

     

    Performance Improvement with Algorithm Parameter Tuning

    --------------------------------------------------------

    Grid Search Parameter

    Random Search Parameter Tuning

     

    Save and Load (serialize and deserialize) Machine Learning Models

    -----------------------------------

    Using pickle

    Using Joblib

     

    Finalize a Machine Learning Project

    -----------------------------------

    Steps for finalizing classification models - Pima Indian dataset

    Dealing with the imbalanced class problem

    Steps for finalizing multi-class models - Iris Flower dataset

    Steps for finalizing regression models - Boston Housing dataset

     

    Predictions and Case Studies

    ----------------------------

    Case study 1: predictions using the Pima Indian Diabetes Dataset

    Case study 2: the Iris Flower Multi-Class Dataset

    Case study 3: the Boston Housing Cost Dataset
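One item from the outline above, k-fold cross-validation, is easy to picture as a partition of the sample indices. In practice scikit-learn's KFold does this for you; the sizes below are made-up toy numbers:

```python
import numpy as np

# Sketch of k-fold cross-validation: split 50 sample indices into
# 5 folds; each fold takes a turn as the test set while the rest train.
k = 5
indices = np.arange(50)
folds = np.array_split(indices, k)

for i, fold in enumerate(folds):
    train = np.setdiff1d(indices, fold)   # everything except this fold
    print(f"fold {i}: test={len(fold)} train={len(train)}")
```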

     

    Machine Learning and Data Science are among the most lucrative jobs in the technology arena nowadays. Taking this course will equip you to compete in this area.

     

    Best wishes with your learning. See you soon in the classroom.

    (0)
    USD150
  • Python Data Science basics with Numpy, Pandas and Matplotlib.
    Welcome to my new course Python Essentials with Pandas and Numpy for Data Science
     
     
     
    In this course, we will learn the basics of Python Data Structures and the most important Data Science libraries like NumPy and Pandas with step by step examples!
     
     
     
    The first session will be a theory session in which, we will have an introduction to python, its applications and the libraries.
     
     
     
    In the next session, we will proceed with installing Python on your computer. We will install and configure Anaconda, a platform you can use for quick and easy installation of Python and its libraries. We will get familiar with Jupyter Notebook, the IDE we will use throughout this course for Python coding.
     
     
     
    Then we will go ahead with the basic Python data types like strings and numbers and their operations. We will deal with different ways to assign and access strings, string slicing, replacement, concatenation, formatting and f-strings.
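A few of those string operations in one place (the values are toy examples):

```python
# Slicing, replacement, concatenation and f-string formatting.
name = "Data Science"

print(name[0:4])                          # slicing -> "Data"
print(name.replace("Data", "Computer"))   # replacement -> "Computer Science"
print(name + " with Python")              # concatenation
print(f"Hello, {name}!")                  # f-string -> "Hello, Data Science!"
```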
     
     
     
    Dealing with numbers, we will discuss assignment, access and the different operations with integers and floats. The operations include basic ones and also advanced ones like exponents. We will also check the order of operations, increments and decrements, rounding values and type casting.
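The number operations listed there, with toy values:

```python
# Division, exponents, increments, rounding and type casting.
x = 7
y = 2

print(x / y)                      # float division -> 3.5
print(x ** y)                     # exponent -> 49
x += 1                            # increment: x is now 8
print(round(3.14159, 2))          # rounding -> 3.14
print(int("42") + float("0.5"))   # type casting -> 42.5
```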
     
     
     
    Then we will proceed with basic data structures in Python, like lists, tuples and sets. For lists, we will try different assignment, access and slicing options. Along with popular list methods, we will also see list extension, removal, reversing, sorting, min and max, existence checks, list looping, slicing, and also inter-conversion of lists and strings.
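A few of those list operations together (toy values):

```python
# Slicing, extension, removal, sorting, min/max, membership, joining.
nums = [3, 1, 4, 1, 5]

print(nums[1:3])        # slicing -> [1, 4]
nums.append(9)          # extension
nums.remove(1)          # removes the FIRST matching value
nums.sort()             # in-place sort
print(nums)             # [1, 3, 4, 5, 9]
print(min(nums), max(nums))            # 1 9
print(4 in nums)                       # existence check -> True
print("-".join(str(n) for n in nums))  # list -> string: "1-3-4-5-9"
```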
     
     
     
    For tuples also we will cover the assignment and access options, and then proceed with the different operations on sets in Python.
     
     
     
    After that, we will deal with Python dictionaries: different assignment and access methods, value update and delete methods, and also looping through the values in a dictionary.
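Those dictionary operations side by side (toy values):

```python
# Access, update, insert, delete and looping over a dictionary.
ages = {"alice": 30, "bob": 25}

print(ages["alice"])    # access -> 30
ages["bob"] = 26        # value update
ages["carol"] = 40      # insert a new key
del ages["alice"]       # delete
for name, age in ages.items():   # looping through entries
    print(name, age)
```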
     
     
     
    After learning all of these basic data types and data structures, it's time for us to proceed with the popular libraries for data science in Python. We will start with the NumPy library. We will check different ways to create a new NumPy array, reshaping, transforming lists to arrays, arrays of zeros and ones, different array operations, array indexing, slicing and copying. We will also deal with creating and reshaping multidimensional NumPy arrays, array transposes, and statistical operations like mean and variance using NumPy.
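A handful of those NumPy operations in one snippet (toy values):

```python
import numpy as np

# List -> array, reshaping, transpose, zeros/ones, slicing, statistics.
a = np.array([1, 2, 3, 4, 5, 6])

m = a.reshape(2, 3)             # reshape to a 2x3 matrix
print(m.T.shape)                # transpose -> (3, 2)
print(np.zeros(3), np.ones(3))  # zero and one arrays
print(a[1:4])                   # slicing -> [2 3 4]
print(a.mean(), a.var())        # statistics -> 3.5 and ~2.917
```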
     
     
     
    Later we will go ahead with the next popular python library called Pandas. At first we will deal with the one dimensional labelled array in pandas called as the series.  We will create assign and access the series using different methods.
     
     
     
    Then will go ahead with the Pandas Data frames, which is a 2-dimensional labelled data structure with columns of potentially different types. We will convert NumPy arrays and also pandas series to data frames. We will try column wise and row wise access options, dropping rows and columns, getting the summary of data frames with methods like min, max etc. Also we will convert a python dictionary into a pandas data frame. In large datasets, its common to have empty or missing data. We will see how we can manage missing data within dataframes. We will see sorting and indexing operations for data frames.
     
     
     
    Most times, external data will be coming in either a CSV file or a JSON file. We will check how we can import CSV and JSON file data as a dataframe so that we can do the operations and later convert this data frame to either CSV and json objects and write it into the respective files. 
     
     
     
    Also we will see how we can concatenate, join and merge two pandas data frames. Then we will deal with data stacking and pivoting using the data frame and also to deal with duplicate values within the data-frame and to remove them selectively.
     
     
     
    We can group data within a data-frame using group by methods for pandas data frame. We will check the steps we need to follow for grouping. Similarly we can do aggregation of data in the data-frame using different methods available and also using custom functions. We will also see other grouping techniques like Binning and bucketing based on data in the data-frame
     
     
     
    At times we may need to use custom indexing for our dataframe. We will see methods to re-index rows and columns of a dataframe and also rename column indexes and rows. We will also check methods to do collective replacement of values in a dataframe and also to find the count of all or unique values in a dataframe.
     
     
     
    Then we will proceed with implementing random permutation using both the NumPy and Pandas library and the steps to follow. Since an excelsheet and a dataframe are similar 2d arrays, we will see how we can load values in a dataframe from an excelsheet by parsing it. Then we will do condition based selection of values in a dataframe, also by using lambda functions and also finding rank based on columns.
     
     
     
    Then we will go ahead with cross Tabulation of our dataframe using contingency tables. The steps we need to proceed with to create the cross tabulation contingency table.
     
     
     
    After all these operations in the data we have, now its time to visualize the data. We will do exercises in which we can generate graphs and plots. We will be using another popular python library called Matplotlib to generate graphs and plots. We will do tweaking of the grpahs and plots by adjusting the plot types, its parameters, labels, titles etc.
     
     
     
    Then we will use another visualization option called histogram which can be used to groups numbers into ranges. We will also be trying different options provided by matplotlib library for histogram
     
     
     
    Overall this course is a perfect starter pack for your long journey ahead with big data and machine learning. You will also be getting an experience certificate after the completion of the course(only if your learning platform supports)
     
     
     
    So lets start with the lessons. See you soon in the class room.
    (0)
    USD70
  • Python Data Science basics with Numpy, Pandas and Matplotlib.
    Welcome to my new course Python Essentials with Pandas and Numpy for Data Science
     
     
     
    In this course, we will learn the basics of Python Data Structures and the most important Data Science libraries like NumPy and Pandas with step by step examples!
     
     
     
    The first session will be a theory session in which we will have an introduction to Python, its applications and its libraries.
     
     
     
    In the next session, we will proceed with installing Python on your computer. We will install and configure Anaconda, a platform you can use for quick and easy installation of Python and its libraries. We will also get familiar with Jupyter Notebook, the IDE that we will be using throughout this course for Python coding.
     
     
     
    Then we will go ahead with the basic Python data types like strings and numbers and their operations. We will deal with the different ways to assign and access strings, along with string slicing, replacement, concatenation, formatting and f-strings.
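As a quick taste of the string operations listed above, here is a minimal sketch (the variable names and values are just for illustration):

```python
# String assignment, slicing, replacement, concatenation, and f-strings
course = "Python Essentials"

first_word = course[:6]                            # slicing  -> "Python"
renamed = course.replace("Essentials", "Basics")   # replacement
combined = first_word + " rocks"                   # concatenation
formatted = f"{course} has {len(course)} characters"  # f-string
```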
     
     
     
    Dealing with numbers, we will discuss assignment, access and the different operations on integers and floats. The operations include basic ones and also advanced ones like exponents. We will also check the order of operations, increments and decrements, rounding values and type casting.
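For example, a short sketch of these number operations (the values are illustrative):

```python
x = 7
y = 2.5

power = x ** 2        # exponent -> 49
ordered = 2 + 3 * 4   # multiplication binds first -> 14
x += 1                # increment -> 8
rounded = round(y)    # note: Python 3 rounds 2.5 to the even number 2
as_int = int(y)       # type casting truncates -> 2
```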
     
     
     
    Then we will proceed with the basic data structures in Python: lists, tuples and sets. For lists, we will try different assignment, access and slicing options. Along with the popular list methods, we will also see list extension, removal, reversing, sorting, min and max, existence checks, list looping, slicing, and the inter-conversion of lists and strings.
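A compact sketch of the list operations mentioned (sample values only):

```python
nums = [3, 1, 4, 1, 5]

nums.append(9)             # extension
nums.remove(1)             # removes only the first 1
nums.sort()                # in-place sorting
smallest, largest = min(nums), max(nums)
has_four = 4 in nums       # existence check
first_two = nums[:2]       # slicing
joined = "-".join(str(n) for n in nums)  # list -> string
back = joined.split("-")                 # string -> list
```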
     
     
     
    For tuples, we will also cover the assignment and access options, and then proceed with the different options for sets in Python.
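For instance, a small sketch of tuple and set usage (illustrative values):

```python
point = (3, 4)          # tuples are immutable sequences
x, y = point            # tuple unpacking

evens = {2, 4, 6}
odds = {1, 3, 5}
evens.add(8)            # sets hold unique items
union = evens | odds    # set union
common = {2, 3} & evens # set intersection
```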
     
     
     
    After that, we will deal with Python dictionaries: the different assignment and access methods, value update and delete methods, and looping through the values in a dictionary.
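A minimal dictionary sketch covering update, delete and looping (the keys and prices are made up):

```python
prices = {"apple": 1.2, "banana": 0.5}

prices["cherry"] = 3.0   # add or update a value
del prices["banana"]     # delete a key

total = 0.0
for fruit, price in prices.items():  # loop over key/value pairs
    total += price
```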
     
     
     
    After learning all of these basic data types and data structures, it's time for us to proceed with the popular libraries for data science in Python. We will start with the NumPy library. We will check different ways to create a new NumPy array: reshaping, transforming lists to arrays, arrays of zeros and ones, different array operations, array indexing, slicing and copying. We will also deal with creating and reshaping multi-dimensional NumPy arrays, array transpose, and statistical operations like mean and variance using NumPy.
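The NumPy operations above can be sketched in a few lines (the array contents are only an example):

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6])  # list -> array
m = a.reshape(2, 3)               # reshape to 2 x 3
zeros = np.zeros((2, 2))          # array of zeros
ones = np.ones(3)                 # array of ones
doubled = a * 2                   # element-wise operation
sliced = a[1:4]                   # slicing
t = m.T                           # transpose -> 3 x 2
mean, var = a.mean(), a.var()     # statistics
```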
     
     
     
    Later we will go ahead with the next popular Python library, called Pandas. At first we will deal with the one-dimensional labelled array in Pandas called the Series. We will create, assign and access Series using different methods.
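A tiny Series sketch showing label-based and position-based access (labels are illustrative):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=["a", "b", "c"])
by_label = s["b"]         # access by label
by_position = s.iloc[0]   # access by position
```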
     
     
     
    Then we will go ahead with Pandas DataFrames, a 2-dimensional labelled data structure with columns of potentially different types. We will convert NumPy arrays and also Pandas Series to DataFrames. We will try column-wise and row-wise access options, dropping rows and columns, and getting a summary of a DataFrame with methods like min, max etc. We will also convert a Python dictionary into a Pandas DataFrame. In large datasets, it's common to have empty or missing data, so we will see how we can manage missing data within DataFrames, along with sorting and indexing operations.
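Here is a short sketch of those DataFrame operations, including one missing value (the data is invented for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"name": ["Ann", "Ben", "Cid"],
                   "score": [85, np.nan, 91]})

col = df["score"]                       # column-wise access
row = df.loc[1]                         # row-wise access
trimmed = df.drop(columns=["name"])     # drop a column
filled = df.fillna(df["score"].mean())  # manage missing data
best = df["score"].max()                # summary method
df_sorted = df.sort_values("score")     # sorting
```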
     
     
     
    Most of the time, external data comes in either a CSV file or a JSON file. We will check how we can import CSV and JSON data as a DataFrame so that we can do our operations, and later convert the DataFrame back to CSV or JSON objects and write them into the respective files.
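A round-trip sketch of the CSV/JSON workflow; to keep it self-contained, it reads the CSV from an in-memory string instead of a file on disk (the data is made up):

```python
import io
import pandas as pd

csv_text = "city,temp\nParis,18\nOslo,7\n"
df = pd.read_csv(io.StringIO(csv_text))  # CSV -> DataFrame

df["temp"] += 1                          # do some operation
back_to_csv = df.to_csv(index=False)     # DataFrame -> CSV text
as_json = df.to_json(orient="records")   # DataFrame -> JSON text
```

With a real file you would pass a path to `pd.read_csv` and `df.to_csv` instead of the string buffer.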
     
     
     
    We will also see how we can concatenate, join and merge two Pandas DataFrames. Then we will deal with data stacking and pivoting using DataFrames, and also with finding duplicate values within a DataFrame and removing them selectively.
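A minimal sketch of merging, concatenating and de-duplicating (the tables are invented):

```python
import pandas as pd

left = pd.DataFrame({"id": [1, 2], "name": ["Ann", "Ben"]})
right = pd.DataFrame({"id": [1, 2], "score": [85, 91]})

merged = left.merge(right, on="id")   # merge on a key column
stacked = pd.concat([left, left])     # concatenate rows
deduped = stacked.drop_duplicates()   # remove duplicate rows
```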
     
     
     
    We can group data within a DataFrame using the groupby methods for Pandas DataFrames; we will check the steps we need to follow for grouping. Similarly, we can aggregate data in a DataFrame using the different methods available and also using custom functions. We will also see other grouping techniques like binning and bucketing based on the data in the DataFrame.
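The grouping, custom aggregation and binning steps can be sketched like this (team names and points are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"team": ["A", "A", "B"], "points": [3, 5, 8]})

totals = df.groupby("team")["points"].sum()  # grouping + aggregation
spread = df.groupby("team")["points"].agg(lambda s: s.max() - s.min())  # custom function

# binning / bucketing the points into labelled ranges
df["bucket"] = pd.cut(df["points"], bins=[0, 4, 10], labels=["low", "high"])
```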
     
     
     
    At times we may need to use custom indexing for our DataFrame. We will see methods to re-index the rows and columns of a DataFrame and also to rename column indexes and rows. We will also check methods for collective replacement of values in a DataFrame and for finding the count of all or unique values in a DataFrame.
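A quick sketch of re-indexing, renaming, replacement and value counting (the data is made up):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 2]}, index=["x", "y", "z"])

renamed = df.rename(columns={"a": "count"})  # rename a column
reordered = df.reindex(["z", "y", "x"])      # custom row order
replaced = df.replace({2: 20})               # collective replacement
counts = df["a"].value_counts()              # counts of unique values
```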
     
     
     
    Then we will proceed with implementing random permutation using both the NumPy and Pandas libraries and the steps to follow. Since an Excel sheet and a DataFrame are both similar 2-D tables, we will see how we can load values into a DataFrame from an Excel sheet by parsing it. Then we will do condition-based selection of values in a DataFrame, including by using lambda functions, and also find rank based on columns.
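The permutation, condition-based selection, lambda and ranking steps can be sketched as follows (the Excel-parsing part is omitted here, since it needs a real file; the scores are invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
shuffled = rng.permutation([1, 2, 3, 4, 5])      # random permutation

df = pd.DataFrame({"score": [70, 95, 85]})
high = df[df["score"] > 80]                      # condition-based selection
df["bonus"] = df["score"].apply(lambda s: s * 0.1)   # lambda function
df["rank"] = df["score"].rank(ascending=False)   # rank based on a column
```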
     
     
     
    Then we will go ahead with cross tabulation of our DataFrame using contingency tables, and the steps we need to follow to create the cross-tabulation contingency table.
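A one-line contingency table with `pd.crosstab` (the categories are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({"gender": ["F", "M", "F", "M"],
                   "handed": ["right", "left", "right", "right"]})

table = pd.crosstab(df["gender"], df["handed"])  # contingency table
```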
     
     
     
    After all these operations on the data we have, it's now time to visualize it. We will do exercises in which we generate graphs and plots using another popular Python library called Matplotlib, and we will tweak the graphs and plots by adjusting the plot types, their parameters, labels, titles etc.
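A minimal Matplotlib sketch with labels and a title; it uses the non-interactive Agg backend and saves to a temporary file so it runs without a display (the data points are made up):

```python
import matplotlib
matplotlib.use("Agg")  # render to files, no GUI needed
import matplotlib.pyplot as plt
import os
import tempfile

x = [1, 2, 3, 4]
y = [1, 4, 9, 16]

fig, ax = plt.subplots()
ax.plot(x, y, marker="o", label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("A simple line plot")
ax.legend()

out = os.path.join(tempfile.gettempdir(), "demo_plot.png")
fig.savefig(out)
plt.close(fig)
```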
     
     
     
    Then we will use another visualization option called the histogram, which can be used to group numbers into ranges. We will also try the different options provided by the Matplotlib library for histograms.
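For example, grouping a handful of ages into three ranges; `hist` returns the counts per bin along with the bin edges (the ages are invented):

```python
import matplotlib
matplotlib.use("Agg")  # no display required
import matplotlib.pyplot as plt

ages = [21, 22, 25, 31, 34, 38, 41, 45]

fig, ax = plt.subplots()
counts, bins, _ = ax.hist(ages, bins=3, edgecolor="black")
plt.close(fig)
```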
     
     
     
    Overall, this course is a perfect starter pack for your long journey ahead with big data and machine learning. You will also be getting an experience certificate after the completion of the course (if your learning platform supports it).
     
     
     
    So let's start with the lessons. See you soon in the classroom.
    (0)
    USD70
  • pfSense for Dummies : Setup and Configure your own firewall.
    (0)
    USD120
  • OpenCV Complete Dummies Guide to Computer Vision with Python
    Hello and let me welcome you to the magical world of Computer Vision.
     
     
     
    Computer Vision is an AI (Artificial Intelligence) based technology that allows computers to understand and label images. It's now used in convenience stores, driverless-car testing, security access mechanisms, policing and investigative surveillance, daily medical diagnosis, monitoring the health of crops and livestock, and much more.
     
     
     
    We even use Computer Vision to analyze data coming from outer space: stars, planets and so on.
     
     
     
    A common example is the face detection and unlocking mechanism that you use in your mobile phone every day. That is also a big application of Computer Vision. And today, top technology companies like Amazon, Google, Microsoft and Facebook are investing millions and millions of dollars into Computer Vision based research and product development.
     
     
     
    So learning and mastering this fantastic world of Computer Vision based technology is surely up-market, and it will make you proficient in competing in the swiftly changing image processing technology arena.
     
     
     
    And this course is designed in such a way that even a complete beginner to programming can master Computer Vision based technology.
     
     
     
    Here are the major topics that we are going to cover in this course.
     
     
     
    Session 1: Introduction to OpenCV
     
    ----------------------------------
     
    We will mainly be concentrating on OpenCV, the open-source Computer Vision library.
     
    It's being used all around the world to provide Computer Vision based technology. So we will have an introduction to OpenCV: the features, the versions and all the other details will be discussed in the first session.
     
     
     
    Session 2: Installing Virtual Box and Ubuntu 18
     
    -----------------------------------------------
     
    In the second session, we will be installing VirtualBox with Ubuntu 18, the latest version of Ubuntu Linux, in it, so that we can have our Computer Vision laboratory set up separately, rather than installing all the packages into the computer or laptop that you use daily. It's better to have a separate lab setup so that we can play and get our hands dirty with the Computer Vision programs, examples and exercises.
     
     
     
    Session 3: Installing Libraries and Dependencies
     
    -----------------------------------------------
     
    And in the third session, we will be installing the libraries that are required for Computer Vision programming. We will mainly be using Python, the language most widely used for scientific and research purposes. So the best combination is Python, OpenCV and Linux: the best ever combination to run our OpenCV based Computer Vision programs.
     
     
     
    Session 4: Installing Sublime Text Editor for Ubuntu
     
    ---------------------------------------------------
     
    And in the next session, we will set up our IDE, which is Sublime Text. We will install and configure Sublime Text inside our Ubuntu virtual machine.
     
     
     
    Session 5: Image Processing Concepts
     
    ------------------------------------
     
    The next session is a theory session in which we will cover the concepts behind image processing: pixels, size, images and all the related fundamentals.
     
     
     
    Session 6: OpenCV: Read Load and Save Image - Sample Program
     
    ------------------------------------------------------------
     
    Then in the next session, we will use OpenCV. We will run a simple OpenCV example to load an image, show that image using the image-viewer feature of OpenCV, and save the image in a separate format.
     
     
     
    Session 7: OpenCV Pixel and Area Manipulation
     
    ---------------------------------------------
     
    Then in the next session, we will manipulate the image based on its pixels, the finest elements available in the image. So we will do pixel-level and area-level manipulation in this session.
     
     
     
    Session 8 - 10:
     
    OpenCV - Drawing Lines, Rectangles, Simple, Concentric Circles, Random Circles
     
    ------------------------------------------------------------------------------
     
    In the coming sessions, we will draw shapes like lines, rectangles, and simple, concentric and random circles on top of our image using the OpenCV library.
     
     
     
    Session 11 - 15:
     
    OpenCV Image Transformation - Translation, Rotation, Resizing, Flipping, Cropping
     
    ---------------------------------------------------------------------------------
     
    Then in the next sessions, we will proceed with image transformations: resizing, flipping, translation (changing the position), cropping and rotating.
     
     
     
    Session 16 - 17:
     
    OpenCV Image Arithmetic Operations, Bitwise / Logical Operations
     
    ----------------------------------------------------------------
     
    In the next sessions, we will do some arithmetic operations on the image and also some bitwise (logical) operations.
     
     
     
    Session 18: OpenCV - Image Masking
     
    ----------------------------------
     
    Then we have image masking. We will include a mask, which is a manually created image, on top of our normal image, and then perform some operations based on this masking.
     
     
     
    Session 19: Image Color Channels Merging and Splitting
     
    ------------------------------------------------------
     
    And then we will proceed with image channels. Basically, a color image has 3 channels, while black-and-white or grayscale images have a single channel. We will merge and split these channels from a given image so that we get a better understanding of image channels.
     
     
     
    Session 20: OpenCV - Other Color Spaces - GRAY, HSV, LAB
     
    --------------------------------------------------------
     
    Then we will deal with color spaces. The primary color space is RGB, and we will also deal with a few other color spaces supported by OpenCV, such as grayscale, HSV and LAB.
     
     
     
    Session 21 - 22:
     
    OpenCV - Gray scale Histograms, Color Histograms
     
    ------------------------------------------------
     
    And in the next sessions, we will deal with histograms, the graphical representation of the intensity of light, or pixels, in an image. We will learn how you can analyze a histogram to tell the nature of the image.
     
     
     
    Session 23: OpenCV - Histogram Equalization
     
    -------------------------------------------
     
    Then we will make use of histogram equalization to equalize the color and contrast of the image and smooth out its rough edges using the histogram equalizer.
     
     
     
    Session 24 - 25: OpenCV - Image Blurring, Image Threshold
     
    ---------------------------------------------------------
     
    Then we will proceed with effects like blurring, followed by thresholding, in which we convert a normal image into a binary format, either black or white.
     
     
     
    Session 26: OpenCV - Image Gradient Detection
     
    ---------------------------------------------
     
    And then we will proceed with Gradient Detection and Edge Detection, which is greatly in use in the Image processing technology world.
     
     
     
    Session 27: OpenCV- Canny Edge Detection
     
    ----------------------------------------
     
    And we will do another exercise in edge detection using the Canny edge detector.
     
     
     
    Session 28: OpenCV - Image Contours
     
    -----------------------------------
     
    Then we will proceed with contours. Contours are lines drawn along the edges of an object, that is, its outer boundary, which is also a very useful feature for detecting objects inside a large image or photograph.
     
     
     
    Session 29: Face Detection using OpenCV
     
    ---------------------------------------
     
    And then we will proceed with some Artificial Intelligence based applications, like face detection: detecting the number of faces inside a large image.
     
     
     
    Session 30: Face Recognition using Machine Learning
     
    -----------------------------------------------
     
    Then comes face recognition, in which the computer program will recognize a face based on pre-learned faces.
     
    For example, given a group photo of American senators, if our computer has been pre-trained with Barack Obama's photo, it will detect that particular face in the large photograph. We will be using a Python face recognition library called face_recognition, so that we can easily and quickly implement a face detection and face recognition program in Python.
     
     
     
    Session 31: Digital Makeup
     
    ---------------------------------------
     
    We will use a technique called digital makeup on a face image to make it look prettier (or scarier). It's done by identifying the selected face landmarks from the list of available face encodings, drawing shapes like polygons and lines over the areas of interest, and filling them with colors. You can then save the image if you want to.
     
     
     
    Session 32: Face Distance Calculation
     
    -------------------------------------------------------
     
    We will calculate the numerical value of a face match, and use this value to decide whether the faces match and the extent of the match obtained.
     
     
     
    Session 32: Real Time Face Recognition using Machine Learning
     
    --------------------------------------------------------------------------------------------
     
    Unlike the previous exercise, in which face recognition was done on a static image, here we are feeding the program with live video from our computer's web camera.
     
    Then every frame is captured and analysed, and face recognition is performed, so that known faces can be detected and recognized in the real-time video.
     
     
     
    Session 33: Optical Character Recognition - OCR using PyTesseract Library
     
    -------------------------------------------------------------------------
     
    Then later on, we will go ahead with Optical Character Recognition, which is also an Artificial Intelligence based application. Optical Character Recognition is actually an old technology that has recently been much improved. We will be using PyTesseract, a Python wrapper for the Tesseract OCR engine, to perform Optical Character Recognition quickly, without having to deal with all the other complexities, since the library makes Optical Character Recognition very easy to do within your Python program.
     
     
     
    Session 34: Simple Real-time motion detector using OpenCV from Camera Video Stream
     
    ------------------------------------------------------------------------------------------------------------------------------
     
     
     
    Session 35: Object Recognition using pre-trained models
     
    Covering SSD, YOLO and Mask R-CNN
     
    -------------------------------------------------------------------------
     
     
     
    Session 36: Real-time Facial Expression Recognition System from Camera Video Stream
     
    ---------------------------------------------------------------------------------------------------------------------------------
     
     
     
    So overall this is a complete package in which you can learn Computer Vision based technology: deep learning based face detection, face recognition and Optical Character Recognition.
     
     
     
    And by the end of this course, we will provide you with a course completion certificate which you can keep and mention in your portfolio, so that you will carry more weight when applying for jobs based on Computer Vision technology.
     
     
     
    So without wasting much time, let's dive into this magical world. See you soon in the classroom. Have a great time. Bye bye!
    (0)
    USD150
  • The Ultimate IT and Technology Job Search Guide for Freshers

    As you know, millions of technology jobs are posted worldwide every day, and still our fresh young engineers and technology lovers, especially candidates from third-world countries, struggle to get the job they love. Have you ever wondered why this happens despite such large demand?

     

    The answer to the above question is that it's only because they fail to advertise themselves efficiently. You may know a few lucky friends who got placed in a job placement drive conducted at your college or university during your last semester of study. But if you were not placed then, you know how hard it is to get a job with the 'fresher' label once you are out of college.

     

    Let me remind you that the techniques we are going to discuss are not the conventional global standard for job searching. These are the tricks that I implemented during my own difficult period of job search, and they proved very successful for me in securing a career.

     

    And in the first session of our course, we discuss how to overcome this fresher label. We will see how to decide on a technology domain and how to advertise it, rather than sticking to the fresher label, which is not going to do you any good in your job search.

     

    In the next session, we will see the serious mistakes that freshers make while creating their resume. We will analyse each of those mistakes and then proceed with creating an excellent-looking resume of our own. We will also train ourselves in the steps for getting a resume sample, editing it yourself using document editors like Word, and later exporting it to the universally accepted PDF format. You can download the template from the resource section of that lecture.

     

    You know, the most ignored but very important parts of a job application email are the covering letter and the subject line. Because of this, even though your resume looks great, you may end up unnoticed by the companies you apply to. We will build a cool and professional-looking covering letter and also see how to write a catchy subject line. That template can also be downloaded from the resource section of that lecture.

     

    There are many interesting ways to find your target recipients, that is, your prospective employers' email addresses. We will get familiar with a few such tips and tricks for building the recipient list, and also see how we can send the emails as a batch to reach out as quickly as possible.

     

    And I am sure you will be excited when you receive that first positive response from a company: an invitation for an interview. We will see how to reply politely and courteously to that email, and also the format to use to reschedule the interview in case you urgently need to.

     

    Then comes the actual preparation. We will focus specifically on how to prepare for the technical session of the interview: how to gather the probable questions and how to gain from reverse engineering the interview process.

     

    All the resources we use can be downloaded from the resources section of the course. Together, we will get you the job that you love the most. And if you are doing a job which is your passion, your professional life will be happy and fulfilling.

  • Computer Vision & Deep Learning in Python: Novice to Expert
    Hello and welcome to my new course "Computer Vision & Deep Learning in Python: From Novice to Expert"
     
    Making a computer classify an image using Deep Learning and Neural Networks is comparatively easier than it was before. Using ready-made packages and libraries, a few lines of code will make the process feel like a piece of cake.
     
    It's just like driving a big fancy car with an automatic transmission. You only have to know the basic controls to drive it. But if you are a true engineer, you will also be fascinated by the internal working of the engine. At an expert level, you should be able to build your own version of that car from scratch using the available basic components. Even though the performance may not match the commercial production-line version, the experience and knowledge you gain from it cannot be put into words.
     
    That is exactly why this course is divided into two halves. In the first half, we will learn the working concepts of image recognition using computer vision and deep learning, and implement simple versions of popular algorithms and techniques using plain Python code. In the second half, we will use popular packages and libraries to implement more complex deep learning image classification models.
     
    Here is a quick list of sessions that are included in this course. 
     
    The first three sessions are theory sessions in which we will have an overview of the concepts of deep learning and neural networks. We will also discuss the basics of a digital image and its composition.
     
    Then we will prepare your computer by installing and configuring Anaconda, the free and open-source Python data science platform and the other dependencies to proceed with our exercises.
     
    If you are new to Python programming, don't worry. The next four sessions cover the basics of Python programming with simple examples.
     
    And here comes the aforementioned first half with our own custom code and libraries.
     
    In the coming two theory sessions, we will cover the basics of image classification and the list of datasets we plan to use in this course.
     
    Then we will do a step-by-step custom implementation of the k-nearest neighbours (KNN) algorithm. It is a simple, easy-to-implement supervised machine learning algorithm that can be used to solve both classification and regression problems. We will use our own classes and methods without any external library. The theory sessions cover the KNN basics. Then we will go ahead with downloading the dataset, then loading, preprocessing and splitting the data. We will train the program and do an image classification among three sets of animals, dogs, cats and pandas, using our custom KNN implementation.
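As a small taste of what such a custom implementation involves, here is a minimal KNN classifier sketched in plain Python. The toy 2-D points standing in for image feature vectors, and the "cat"/"dog" labels, are made up for illustration only:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours."""
    # Euclidean distance from the query to every training point
    dists = [(math.dist(p, query), label) for p, label in zip(train_X, train_y)]
    dists.sort(key=lambda d: d[0])
    # Majority vote among the k closest labels
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: two clusters standing in for "cat" and "dog" feature vectors
train_X = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
train_y = ["cat", "cat", "cat", "dog", "dog", "dog"]
print(knn_predict(train_X, train_y, (2, 2)))  # -> cat
```

The course applies the same idea to real image data, where each training point is a flattened pixel vector rather than a 2-D coordinate.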
     
    Now we will proceed with linear classification. Starting with the concept and theory, we will build our own scoring function and implement it using plain Python code. Later we will discuss the loss function, performance optimization concepts, and the terminology associated with them.
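The scoring function of a linear classifier maps an input vector to one score per class, f(x) = Wx + b. A minimal NumPy sketch, with illustrative shapes (3 classes, 4-dimensional inputs):

```python
import numpy as np

def score(W, b, x):
    """Linear classifier scoring function: returns one score per class."""
    return W @ x + b

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))   # 3 classes, 4-dimensional input
b = np.zeros(3)
x = rng.standard_normal(4)        # stands in for a flattened image
s = score(W, b, x)
print(s.shape)  # (3,) -- one score per class; the highest score wins
```

In practice x would be a flattened image and W would have one row of learned weights per class.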
     
    Then we will start with the most important optimization algorithm for deep learning, which is Gradient Descent. We will have separate, elaborate sessions where we learn the concept and then the implementation of Gradient Descent using custom code. Later we will proceed to the more advanced Stochastic Gradient Descent, covering its concepts first and then implementing it using the custom classes and methods we created.
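The core idea of gradient descent, repeatedly stepping against the gradient, fits in a few lines of plain Python. The quadratic objective below is just an illustrative stand-in for a real loss function:

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    """Vanilla gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges towards the minimum at 3.0
```

Stochastic gradient descent uses the same update rule but estimates the gradient from one sample (or a mini-batch) at a time instead of the full dataset.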
     
    We will then look at regularization techniques that can be used to enhance performance, and implement them with our custom code.
     
    In the coming sessions, we will cover the Perceptron, a fundamental unit of the neural network which takes weighted inputs, processes them, and is capable of performing binary classification. We will discuss the working of the Perceptron model, implement it using Python, and try some basic prediction exercises using the perceptron we created.
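A minimal sketch of the classic perceptron update rule, trained on a toy AND-gate dataset (chosen here because it is linearly separable, so a single perceptron can learn it):

```python
def train_perceptron(data, epochs=10, lr=0.1):
    """Train a single perceptron with the classic error-driven update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred            # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# AND gate: output is 1 only when both inputs are 1
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])  # [0, 0, 0, 1]
```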
     
    In deep learning, back-propagation is a widely used algorithm for training feed-forward neural networks in supervised learning. We will discuss the mechanism of backward propagation of errors, then create our own classes to implement the concept, with implementation projects for a simple binary calculation dataset and also the MNIST optical character recognition dataset.
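A compact sketch of backward propagation of errors on a tiny hand-rolled network (NumPy only). The architecture and the XOR-style targets here are illustrative choices, not the course's exact project:

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny feed-forward net: 2 inputs -> 3 hidden units -> 1 output, all sigmoid
W1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])   # XOR-style targets

def forward(x):
    h = sigmoid(W1 @ x + b1)
    o = sigmoid(W2 @ h + b2)
    return h, o

def mse():
    return sum((forward(x)[1][0] - t[0]) ** 2 for x, t in zip(X, y)) / len(X)

before = mse()
lr = 0.5
for _ in range(2000):
    for x, t in zip(X, y):
        h, o = forward(x)
        # Backward pass: chain rule through both layers
        d_o = (o - t) * o * (1 - o)        # error at the output pre-activation
        d_h = (W2.T @ d_o) * h * (1 - h)   # error propagated to the hidden layer
        W2 -= lr * np.outer(d_o, h)
        b2 -= lr * d_o
        W1 -= lr * np.outer(d_h, x)
        b1 -= lr * d_h
after = mse()
print(after < before)  # the loss decreases as the network learns
```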
     
    With all the knowledge gained from the pain of making custom implementations, we can now proceed to the second half: deep learning implementation using the libraries and packages that are used for developing commercial Computer Vision Deep Learning programs.
     
    We will be using Keras, an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Theano and other backends for creating deep learning applications.
     
    At first we will build a simple neural network implementation with Keras using the MNIST Optical Character Recognition dataset. We will train and evaluate this neural network to obtain the accuracy and loss achieved during the process.
     
    In deep learning and computer vision, a convolutional neural network (CNN) is a class of deep neural networks most commonly applied to analysing visual imagery. First we will discuss the steps and layers in a convolutional neural network. Then we will create classes and methods for a custom implementation of a convolutional neural network using the Keras library, which features different filters that we can use on images.
     
    Then we will have a quick discussion of CNN design best practices and go ahead with ShallowNet, a basic and simple CNN architecture. We will create a common class implementing ShallowNet, then train and evaluate the ShallowNet model on the popular Animals and CIFAR-10 image datasets. We will also see how to serialize, or save, the trained model and later load and reuse it. Even though it is a very shallow network, we will try to predict the class of a given image using ShallowNet for both the Animals and CIFAR-10 datasets.
     
    After that we will try a famous CNN architecture called LeNet, designed for handwritten and machine-printed character recognition. For LeNet, too, we will create the common class and then train, evaluate and save the LeNet model using the MNIST dataset. Later we will predict the class of a handwritten digit image.
     
    Then comes the mighty VGGNet architecture. We will create the common class and then train, evaluate and save the VGGNet model using the CIFAR-10 dataset. After hours of training, we will try predictions on photos of a few common real-life objects falling into the CIFAR-10 categories.
     
    While training deep networks, it is helpful to reduce the learning rate as the number of training epochs increases. We will learn a technique called Learning Rate Scheduling in our next session and implement it in our Python code.
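Step decay is one common schedule. A tiny sketch of the idea, with constants that are illustrative defaults rather than values from the course:

```python
def step_decay(epoch, initial_lr=0.01, factor=0.5, drop_every=10):
    """Multiply the learning rate by `factor` every `drop_every` epochs."""
    return initial_lr * (factor ** (epoch // drop_every))

print([step_decay(e) for e in (0, 10, 20)])  # [0.01, 0.005, 0.0025]
```

A function like this can be handed to the training loop (or, in Keras, to a learning-rate scheduler callback) so the step size shrinks as training progresses.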
     
    Since we spend hours training a model, if we don't checkpoint our training models at the end of a job, there is a great chance we'll lose all of our hard-earned results! We will see how to do that efficiently in the coming sessions.
     
    Enough with training on our little computer. Let's go ahead with popular deep learning models already pre-trained for us and included in the Keras library. They are trained on ImageNet data, a collection of images spanning 1000 categories.
     
    The first pre-trained model we deal with is VGG16: we will download the already-trained model and then make predictions. Later we will go a bit deeper with the VGG19 pre-trained model and do image classification prediction with it.
     
    The next pre-trained model we use is ResNet, which utilizes a technique called skip connections, or shortcuts, to jump over some layers. We will do image classification prediction with this network too.
     
    Finally, we will use the Inception and Xception models, convolutional neural networks trained on more than a million images from the ImageNet database; Xception in particular learns using depthwise separable convolutions. We will download the weights and do image classification prediction with these networks too.
     
    Overall, this course will be the perfect recipe of custom and ready-made components that you can use for your career in Computer Vision using Deep Learning. 
     
    All the example code and sample images with dataset can be downloaded from the link included in the last session or resource section of this course. 
     
    We will also provide you with a course completion certificate once you are done with all the sessions and it will add great value to your career.
     
    So best wishes and happy learning. See you soon in the class room.
     
    Bibliography & Reference Credits:
     
    * CS231M, Stanford University; CS231N, Stanford University
     
    * pyimagesearch blog by Dr. Adrian Rosebrock, PhD. 
     
    * Deep Learning for Computer Vision : Dr. Adrian Rosebrock, PhD. 
     
    * Andrej Karpathy. CS231n: Convolutional Neural Networks for Visual Recognition. 
     
    * Andrej Karpathy. Linear Classification
     
    * Machine Learning is Fun! Adam Geitgey
     
    * Andrew Ng. Machine Learning
     
    * Andrej Karpathy. Optimization
     
    * Karen Simonyan and Andrew Zisserman. "Very Deep Convolutional Networks for Large-Scale Image Recognition"
     
    Intro Background Video Credits:
     
    * Machine Learning: Living in the Age of AI
  • Computer vision: OpenCV Fundamentals using Python

    Hi There!

     

    Welcome to my new course OpenCV Fundamentals using Python. This is the first course from my Computer Vision series.

     

    Let's see what interesting topics are included in this course. First, we will have an overview of computer vision and the amazing OpenCV, the open-source computer vision library.

     

    After that, we are ready to prepare our computer for OpenCV and then install OpenCV itself. Then we will try a one-line program to check that everything is working fine.

     

    When I said this course is for complete beginners, I really meant it. Even if you are coming from a non-Python background, the next few sessions and examples will help you build the basic Python programming skills needed for the rest of the sessions. The topics include Python assignment, flow control, functions and data structures.

     

    Now we are all set to proceed with Python computer vision exercises. But before that, we need to learn how a digital image is organized: the concept of pixels, color and greyscale channels, color codes, etc.

     

    Then we will write our first OpenCV program, in which we simply load and display an image from our computer and write a greyscale version of it back to disk.

     

    As you already know, the basic building block of a digital image is the pixel. We will use the power of OpenCV to manipulate the individual pixels of an image and modify it.
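Since OpenCV represents images as NumPy arrays (height x width x 3 channels, in BGR order), pixel manipulation boils down to array indexing. A small sketch on a synthetic image, so no image file is assumed:

```python
import numpy as np

# A tiny 4x4 black BGR "image" (OpenCV stores colour images as uint8 arrays)
img = np.zeros((4, 4, 3), dtype=np.uint8)

# Read one pixel: returns its (B, G, R) values
print(img[0, 0])             # [0 0 0]

# Set a single pixel to pure red (BGR order: blue=0, green=0, red=255)
img[0, 0] = (0, 0, 255)

# Set a whole region (rows 2-3, cols 2-3) to white in one slice assignment
img[2:4, 2:4] = (255, 255, 255)

print(img[0, 0], img[3, 3])  # the red pixel and a white pixel
```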

     

    In the next session, we will use a similar technique to select a collective area of pixels and manipulate it, trying to change its color, and also retrieve the properties of the image.

     

    As you may know, there are three color channels in a color image and a single one in a greyscale image. We will separate and extract those color channels and later merge them back to form the original image.

     

    Color spaces, unlike color channels, describe the way colors are organized in an image. In the next session, we will explore the popular color spaces and do exercises that switch an image between different color spaces.

     

    In the next session, we will use OpenCV to draw simple geometric shapes like lines, rectangles, circles, ellipses and polygons onto an image canvas. We will also insert text into the canvas.

     

    Then we will apply some morphological transformations to our image: erosion, which erodes pixels; dilation, which expands them; opening, for white-noise removal; closing, for black-point-noise removal; the gradient transformation; and finally the top-hat and black-hat morphological transformations.

     

    After that we will try the geometric transformations: scaling, or resizing, the image; translating, or shifting, it; flipping it; rotating it about an axis; and cropping it to extract the region of interest.

     

    In the coming two sessions, we will try basic arithmetic and logical operations between two images. We will perform addition and subtraction between two images, then try the AND, OR, XOR and NOT bitwise operations on two images and check the results.
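OpenCV's bitwise operations act element-wise on uint8 arrays, much like NumPy's bitwise functions. A sketch using NumPy binary masks, so no OpenCV installation is assumed:

```python
import numpy as np

# Two binary masks (255 = white, 0 = black), as OpenCV represents them
a = np.array([[255, 255], [0, 0]], dtype=np.uint8)
b = np.array([[255, 0], [255, 0]], dtype=np.uint8)

print(np.bitwise_and(a, b))  # white only where both masks are white
print(np.bitwise_or(a, b))   # white where either mask is white
print(np.bitwise_xor(a, b))  # white where exactly one mask is white
```

Operations like these, combined with masks, are what make region-of-interest extraction possible.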

     

    Later we will go ahead with image masking, a technique for covering the unwanted areas of an image and displaying only the region of interest.

     

    After that we will try image smoothing techniques. First we will apply our own filter for a custom smoothing of the image, and later use built-in filters implementing Gaussian smoothing, average smoothing, median smoothing and finally bilateral smoothing.

     

    Then we will see an advanced technique called thresholding, which is very useful for preprocessing and preparing an image for computer vision algorithms. We will do exercises demonstrating simple thresholding, Otsu thresholding and adaptive thresholding.
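Simple binary thresholding can be sketched in a couple of lines of NumPy; the cutoff value 127 below is just a common illustrative choice:

```python
import numpy as np

def simple_threshold(gray, thresh=127, maxval=255):
    """Binary threshold: pixels above `thresh` become `maxval`, the rest 0."""
    return np.where(gray > thresh, maxval, 0).astype(np.uint8)

gray = np.array([[10, 100], [150, 250]], dtype=np.uint8)
print(simple_threshold(gray))
# [[  0   0]
#  [255 255]]
```

Otsu thresholding chooses the cutoff automatically from the histogram, and adaptive thresholding computes a local cutoff per neighbourhood instead of one global value.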

     

    Then we will look at an interesting image color-intensity plotting technique called the histogram. We will plot a histogram and learn how to analyse it to predict the nature of the image.

     

    By using this histogram and adjusting values based on it, we can enhance the contrast of dull-looking images. We will explore the technique called histogram equalization.

     

    Image pyramids are differently sized versions of an image, generated and stacked one on top of another. We will explore the OpenCV methods for generating image pyramids.

     

    For us humans, it's an easy task to find an object in a scene and trace its edges. For computers it's not that easy. We will explore the OpenCV functions that let us find edges using Canny edge detection.

     

    As we know, to a computer an image is just a collection of numbers. To find the edges, the gradients, or patterns of intensity change in the colors, must be found. We will use OpenCV's gradient detection functions to do that.

     

    Finally, we will draw contours along the different objects in an image with the help of the above-mentioned techniques, and try to count the number of objects in the scene.

     

    That's all about the basics. The code and images used in this course have been uploaded and shared in a folder. I will include the download link in the last session or the resource section of this course. You are free to use the code in your projects, no questions asked.

     

    So that's all for now, see you soon in the class room. Happy learning and have a great time.

  • Computer Vision: Face Recognition Quick Starter in Python

    Hi There!

     

    Welcome to my new course 'Face Recognition with Deep Learning using Python'. This is the second course in my Computer Vision series.

     

    Face detection and face recognition are among the most used applications of computer vision. Using these techniques, the computer can extract one or more faces from an image or video and compare them with existing data to identify the people in the image.

     

    Face detection and face recognition are widely used by governments and organizations for surveillance and policing. We also use them daily in many applications, like the face unlocking of cell phones.

     

    This course is a quick starter for people who want to dive deep into face recognition using Python without having to deal with all the complexities and mathematics associated with a typical deep learning process.

     

    We will be using a Python library called face-recognition, which uses simple classes and methods to get face recognition implemented with ease. We also use OpenCV, dlib and Pillow as supporting libraries.

     

    Let's now see the list of interesting topics that are included in this course.

     

    At first we will have an introductory theory session about Face Detection and Face Recognition technology.

     

    After that, we are ready to prepare our computer for Python coding by downloading and installing the Anaconda package. Then we will install the rest of the dependencies and libraries we require, including dlib, face-recognition and OpenCV, and try a small program to check that everything is installed correctly.

     

    Most of you may not come from a Python-based programming background. The next few sessions and examples will help you gain the basic Python programming skills needed for the sessions in this course. The topics include Python assignment, flow control, functions and data structures.

     

    Then we will have an introduction to the basics and working of face detectors, which detect human faces in given media. We will try Python code that detects the faces in a given image and extracts each face as a separate image.

     

    Then we will go ahead with face detection from video. We will stream real-time live video from the computer's webcam and try to detect faces in it, drawing a rectangle around each face detected in the live video.

     

    In the next session, we will customize the face detection program to blur the detected faces dynamically from the webcam video stream.

     

    After that we will try facial expression recognition using a pre-trained deep learning model, identifying facial emotions from the real-time webcam video as well as static images.

     

    Then we will try age and gender prediction using pre-trained deep learning models, identifying age and gender from the real-time webcam video as well as static images.

     

    After face detection, we will have an introduction to the basics and working of face recognition, which identifies the faces already detected.

     

    In the next session, we will try Python code that identifies people by name from their faces in a given image, drawing a rectangle around each face with the name on it.

     

    Then, as we did for face detection, we will go ahead with face recognition from video. We will stream real-time live video from the computer's webcam and try to identify and name the faces in it, drawing a rectangle around each detected face with the name beneath it in the live video.

     

    Often during coding, along with the face-matching decision, we need to know how close the match is. For that we use a parameter called face distance, a measure of how closely two faces match. We will later convert this face distance value to a face-matching percentage using simple mathematics.
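One simple, illustrative way to turn a face distance (where 0.0 means identical faces) into a matching percentage is a linear mapping. The formula below is our own illustrative choice, not part of the face_recognition library:

```python
def match_percentage(face_distance):
    """Map a face distance (0.0 = identical) to a rough match percentage
    via a simple linear mapping. Illustrative only: real pipelines often
    use a non-linear mapping around the library's matching threshold."""
    linear = max(0.0, 1.0 - face_distance)
    return round(linear * 100, 2)

print(match_percentage(0.35))  # 65.0
```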

     

    In the coming two sessions, we will learn how to tweak the face landmark points used for face detection. We will draw lines joining these face landmark points so that we can visualize the points on the face that the computer uses for evaluation.

     

    Taking landmark point customization to the next level, we will use the landmark points to apply custom face make-up to a face image.

     

    That's all about the topics currently included in this quick course. The code, images and libraries used in this course have been uploaded and shared in a folder. I will include the download link in the last session or the resource section of this course. You are free to use the code in your projects, no questions asked.

     

    Also, after completing this course you will receive a course completion certificate, which will add value to your portfolio.

     

    So that's all for now, see you soon in the class room. Happy learning and have a great time.

