Quantum Perceptron Results

Comparison of a classical perceptron with a quantum perceptron on a two-circle classification problem.

Results of a simple model problem in non-linear binary classification show how the amount of entanglement affects the learning ability of a single quantum perceptron. The model problem is to classify whether a point lies on the inner (lime/green) circle of radius 0.6 +/- 0.05 or the outer (cyan/blue) circle of radius 1.0 +/- 0.05. The training set contained only 100 points and the test set 200 points. The results shown here compare the classical result to the quantum result after 10 training epochs on the same 100 training points. The classical result quickly converged to less than 1% error, while the quantum result reached roughly 3% error, depending on the initial guess for the randomly chosen weights.

Without entanglement among the 6 qubits of the quantum perceptron, the error rate grew to approximately 30% on the same test case; even with only partially diminished entanglement among the 6 qubits, the error rate grew to 15%. One possible explanation is that the quantum perceptron learns through very slight changes to the relative entanglement between the qubits, which would explain why degrading the total amount of entanglement leads to high error rates.

On closer inspection, the quantum perceptron used here is essentially a classical perceptron with quantum weights. This is somewhat in line with the idea that quantum systems of this kind may exist in Nature (perhaps in the dendrites of real neurons). See more in our article on the J-Neuron.
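
As a rough illustration of the two-circle model problem described above, the sketch below generates the inner and outer circles with the radii and set sizes quoted in the text and fits a classical baseline. The sampling procedure, the squared-radius feature map, and the scikit-learn Perceptron are illustrative choices only and not necessarily the exact setup behind the results above.

    import numpy as np
    from sklearn.linear_model import Perceptron

    def sample_circle(n, radius, jitter=0.05, rng=None):
        """Sample n points near a circle of the given radius, with radial jitter."""
        rng = rng or np.random.default_rng()
        angles = rng.uniform(0.0, 2.0 * np.pi, n)
        radii = radius + rng.uniform(-jitter, jitter, n)
        return np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

    def make_two_circles(n_total, rng=None):
        """Half the points on the inner circle (label 0), half on the outer (label 1)."""
        n_half = n_total // 2
        inner = sample_circle(n_half, 0.6, rng=rng)   # inner circle, radius 0.6 +/- 0.05
        outer = sample_circle(n_half, 1.0, rng=rng)   # outer circle, radius 1.0 +/- 0.05
        X = np.vstack([inner, outer])
        y = np.concatenate([np.zeros(n_half), np.ones(n_half)])
        return X, y

    rng = np.random.default_rng(0)
    X_train, y_train = make_two_circles(100, rng)     # 100 training points
    X_test, y_test = make_two_circles(200, rng)       # 200 test points

    # Illustrative classical baseline: a single perceptron on the squared-radius
    # feature x^2 + y^2, which makes the two circles linearly separable.
    feat = lambda X: (X ** 2).sum(axis=1, keepdims=True)
    clf = Perceptron(random_state=0)
    clf.fit(feat(X_train), y_train)
    print("test error:", 1.0 - clf.score(feat(X_test), y_test))

Note that a plain linear perceptron cannot separate concentric circles in raw (x, y) coordinates, which is why this sketch uses the squared-radius feature; the problem is non-linear in the original inputs.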

For us, this was a first concrete example of quantum machine learning in action on an easy-to-understand model problem, running in Python with calls through Q# to the QDK full-state simulator (provided by Microsoft Azure Quantum).
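
For readers unfamiliar with the QDK's Python interop, the following is a minimal, hedged sketch of how a Q# operation can be compiled and run on the full-state simulator from Python using the qsharp package. The one-qubit operation is purely illustrative; it is not the 6-qubit perceptron described above.

    import qsharp

    # Illustrative one-qubit operation: encode a feature as a rotation angle
    # and measure in the Z basis. This is NOT the 6-qubit quantum perceptron.
    classify = qsharp.compile("""
    operation ClassifyPoint(theta : Double) : Result {
        use q = Qubit();
        Ry(theta, q);
        let r = M(q);
        Reset(q);
        return r;
    }
    """)

    # .simulate() dispatches the operation to the local QDK full-state simulator.
    result = classify.simulate(theta=0.6)
    print(result)  # 0 or 1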

© 2021 JOAB.AI, All Rights Reserved