
Engineering Perspective: How Can Smalltalk Embrace Symbolic AI?

Submitted by a Cincom Smalltalk™ engineer:

Python has become the favorite tool of AI researchers these days because of its strong support for scientific programming and its ability to exploit multiple GPUs for mathematical calculations. It's really good at the "gritty stuff," in particular training deep neural networks on massive data sets. That said, to a Smalltalker like me, the Python language is a bit clunky, and it's not what comes to mind when thinking about traditional AI, which is now referred to as Symbolic AI. Whatever happened to that kind of AI anyway?

In the 1970s, Marvin Minsky (one of AI's founders) was a kingmaker among AI researchers. Then came the 80s, along with Prolog and expert-system packages. This was soon followed by a resurgence of neural networks (which had faded during the 70s) thanks to backpropagation, which enabled multi-layer networks and new solutions to previously vexing problems. Against these new approaches, traditional AI lost traction through the 90s, a trend that continues to this day. Now we have Machine Learning, with deep CNNs (convolutional neural networks) and Reinforcement Learning dominating the news and conquering all sorts of technical problems, from genetic research and self-driving cars to mastering Go. And most researchers are using what's available: Python. So how far will this new AI get?

Perhaps if Symbolic AI is "top-down," neural networks are "bottom-up." There is an active debate about their limitations. An article by Ben Dickson (https://bdtechtalks.com/2020/03/04/gary-marcus-hybrid-ai) describes the debate, and he points to Gary Marcus's paper (https://arxiv.org/abs/2002.06177v3), which insists that Symbolic AI needs to make a comeback. His ideas convinced me. Minsky rides again! The point of all this is that Smalltalk can take us there. Sure, Python is good at working with the raw grit of arrayed number processing, but Smalltalk has the elegance, grace and power to embrace Symbolic AI. I can hear it now: "a Smalltalk AI stack!"

And that leads us back from this vision thing down to the practical. SmalltyPy is a means to exploit what Python is good at in service of what we can do with Smalltalk, including elegant Symbolic AI! Okay, we still need to prove that part, but read Marcus's article for perspective. And SmalltyPy covers more than "just AI." After all, there are plenty of other things that Python is being used for, all of which can be exploited from Cincom Smalltalk using SmalltyPy. Take a look at all the tests in PythonExamples. For example, I ran PythonExamples>>testPerformanceComparison with the following speed results on the Transcript:

factorial performance [ms]: Smalltalk: 154 PythonConnect: 97 Python: 18
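The exact workload inside testPerformanceComparison isn't shown here, but as a rough, purely hypothetical analogue, a plain-Python timing loop over repeated factorials might look like this:

import math
import time

start = time.perf_counter()
for _ in range(1000):
    math.factorial(500)  # hypothetical workload; the real test's is not shown
elapsed_ms = (time.perf_counter() - start) * 1000
print('factorial performance [ms]: Python: %d' % elapsed_ms)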

SmalltyPy has broad capabilities for creating Python code on the fly, sending messages to Python objects, fetching results and calling back into Smalltalk. But it’s early yet, and my project is just a light demo. Okay, now on to the project details.

My Python script uses Keras, a layer above TensorFlow. Anyone who has used TF or Keras has probably confronted some vexing Python package configuration issues, since there are so many independent packages and versions with complicated, interwoven dependencies. It's not like you can just "load a package" as you would in Cincom® ObjectStudio® or Cincom® VisualWorks®. I used Miniconda to create a Python environment that supports Keras under Python 3.7. The fact that I was able to get things working quickly is a testament to SmalltyPy's utility.
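I'm guessing at the exact setup commands here, since the article doesn't show them, but creating such an environment amounts to something like the following ("neural" is the environment name used in the startup script below):

conda create -n neural python=3.7
conda activate neural
pip install tensorflow keras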

After creating an environment in Miniconda and installing Keras, I was ready to start VisualWorks. The following is the shell script that works for me, although some of it may be unnecessary, especially for simpler applications.

**** From an Anaconda PowerShell prompt (Miniconda3), run as Administrator ****

# Activate the conda environment that has Keras installed.
conda activate neural

# Put the environment's directories at the front of PATH.
$ENV:PATH="C:\Users\My Name\.conda\envs\neural\Scripts;C:\Users\My Name\.conda\envs\neural;"+$ENV:PATH

# Point the embedded Python at the environment's home and module path.
$ENV:PYTHONHOME="C:\Users\My Name\.conda\envs\neural"
$ENV:PYTHONPATH="C:\Users\My Name\.conda\envs\neural"

# Start VisualWorks on the Python-enabled image.
cd C:\Development\CincomBuilds\VisualWorks9.0_sep19.4\image
..\bin\win64\visual.exe python3_64.im

I began with an image set up to access the Cincom Public Repository and loaded the bundle 'All SmalltyPy':

  1. From the main menu, select System/Load Parcels Named, select StoreForPostgreSQL and click OK.
  2. From the main menu, select Store/Connect to Repository… and use the following settings for the Cincom Public Store:
     Interface: PostgresSocketConnection
     Environment: store.cincomsmalltalk.com:5432_store_public
     User Name: guest
     Password: guest
     Table Owner: BERN
  3. From the main menu, select Store/Published Items. Enter 'All SmalltyPy', select the version (9.0.1 - 3, heeg) and load it.

During loading, select the suggested versions of any prerequisites.

Once that's loaded, try some examples to ensure that the interface is working. For example, try running the test PythonExamples>>testSimpleMatplotlib. Since my "neural" environment doesn't have matplotlib installed yet, I get an error running this test: ModuleNotFoundError("No module named 'matplotlib'"). This is easily fixed by exiting, installing matplotlib (conda install matplotlib from the activated environment should do it) and restarting. But I do have TensorFlow installed, so I ran PythonExamples>>testBasicTensorflowUsageForTF2. Note the difference in times between the two runs!

Start: #testBasicTensorflowUsageForTF2

Finished: #testBasicTensorflowUsageForTF2 in: 19.123 seconds

Start: #testBasicTensorflowUsageForTF2

Finished: #testBasicTensorflowUsageForTF2 in: 7 milliseconds

It turns out that tensorflow.keras has fewer kinks than standalone Keras (which imports TensorFlow itself); the standalone version kept trying to write to a None stream when I ran it from VisualWorks.

Heeg offers these instructions for using the SmalltyPy debug system:

  1. In the Smalltalk debugger, inspect 'errInfo'.
  2. In the Inspector, inspect 'last_traceback'.
  3. This is the top of the stack.
  4. In 'tb_frame' you find the first stack frame with useful data.
    1. You can see the source file in which the error was thrown.
    2. In 'tb_lineno' you find the line number at which the error occurred. This helps to pin down the error.
  5. Inspect the variable 'tb_next' to open the next frame.
  6. Doing this recursively, you can walk down the stack. Eventually you will find the source file and the line where the underlying error was thrown.
  7. Now you can open that final source file and look at what really happened.
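In plain Python terms, the walk that Heeg describes follows the standard traceback-object attributes (tb_frame, tb_lineno, tb_next). A minimal sketch, with a dummy error standing in for the real failure:

import sys

try:
    1 / 0  # dummy error standing in for the real failure
except Exception:
    tb = sys.exc_info()[2]  # the 'last_traceback' equivalent
    while tb is not None:
        frame = tb.tb_frame  # current stack frame; its source file is in frame.f_code
        print(frame.f_code.co_filename, tb.tb_lineno)
        tb = tb.tb_next  # descend toward where the error was actually raised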

Following these instructions, I was able to alter my Keras script to conform to tensorflow.keras requirements. In particular, there was an annoying problem with model.print_summary(), which appears to still be unsupported, so I removed it for now. And there were other silly things: layer.name (keras) needs to be layer._name (tf.keras), and rmsprop becomes RMSprop in tf.keras. The most unpleasant problem is a change where tf.keras uses model.fit(), not model.fit_generator(), even though I'm using a generator to enhance the variability of the data. Changing that worked, but it exposed an unpleasant message about expecting a Sequence instead of my data arrays. It seems to have worked anyway.
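The author's actual script ships in the CIFAR10Example package mentioned below; purely as a self-contained sketch of the fit_generator()-to-fit() change (the tiny model and generator here are invented for illustration), the adjustment looks something like this:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# A throwaway model, just to have something to fit.
model = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),  # 'rmsprop' becomes RMSprop
              loss='categorical_crossentropy')

def augmented_batches(batch_size=32):
    # Stand-in for a real data-augmenting generator.
    while True:
        x = np.random.rand(batch_size, 32, 32, 3).astype('float32')
        y = tf.keras.utils.to_categorical(np.random.randint(0, 10, batch_size), 10)
        yield x, y

# Standalone Keras used model.fit_generator(...); tf.keras accepts the
# generator directly in model.fit(), with a warning about preferring a Sequence.
model.fit(augmented_batches(), steps_per_epoch=10, epochs=1)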

After these minor nits were cleared up, thanks to the debugger facilities, I was ready to run my complete script. The output can be streamed to a file as described under the "/Python/Help/Examples/Installable classes" menu pick. The script below uses the Transcript, although a file is the more permanent choice. Since Keras updates the output stream by overwriting the current line until a training step has completed, my file output had a line for every such update. There is probably a flag for adjusting that, but I sent the output to the Transcript instead.
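For what it's worth, that flag is probably Keras's verbose argument: verbose=2 prints one summary line per epoch instead of the in-place progress bar, which is friendlier for file logging. Reusing the toy model and generator from the sketch above:

model.fit(augmented_batches(), steps_per_epoch=10, epochs=1, verbose=2)  # one line per epoch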

The entire script can be found on the Cincom Public Store in a package named ‘CIFAR10Example’. After loading that:

  • Open a browser on the class named 'Cifar10Demo'.
  • In the Cifar10Demo class comment tab, verify that you have already performed the first three steps found there, then go to Step 4.
  • Run the quickTest, which should complete in a minute or less.
  • If that works, proceed to train your network using Step 5.

Note that neural networks can easily take five minutes for each epoch, so at that rate 50 epochs is already more than four hours of training, and often 100+ epochs are beneficial.

Additional Information

The Georg Heeg eK Company, a Cincom partner, recently released SmalltyPy, a Python Connect for Cincom VisualWorks 9.0. It opens the full power of Python to Cincom Smalltalk and is available from the Cincom Public Store as the bundle named 'All SmalltyPy'. I decided to try it out on Windows 10, with the simple goal of training a 5-layer convolutional neural network on the CIFAR-10 data set of images.
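The author's real model is in the CIFAR10Example package; purely as a hypothetical illustration of what a small CIFAR-10 CNN looks like in tf.keras (the layer sizes here are invented, not taken from the article), consider:

import tensorflow as tf
from tensorflow.keras import layers, models

# CIFAR-10: 50,000 training and 10,000 test images, 32x32 RGB, 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Five weighted layers: three convolutions and two dense layers.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])

model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))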