Interpreting a linear classifier

Notice that a linear classifier computes the score of a class as a weighted sum of all of its pixel values across all 3 of its color channels. Depending on precisely what values we set for these weights, the function has the capacity to like or dislike (depending on the sign of each weight) certain colors at certain positions in the image. For instance, you can imagine that the “ship” class might be more likely if there is a lot of blue on the sides of an image (which could likely correspond to water). You might expect that the “ship” classifier would then have a lot of positive weights across its blue channel weights (presence of blue increases score of ship), and negative weights in the red/green channels (presence of red/green decreases the score of ship).
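As a concrete sketch of this (using NumPy with made-up random weights rather than a trained model), the score computation is just a matrix-vector product:

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(32 * 32 * 3)               # a CIFAR-10 image stretched into a 3072-d vector
W = np.random.randn(10, 32 * 32 * 3) * 0.01   # one row of weights per class (10 classes)
b = np.zeros(10)                              # per-class bias

scores = W.dot(x) + b                         # each class score is a weighted sum of all pixels
print(scores.shape)                           # (10,)
```

A large positive weight on a blue-channel position raises the "ship" score whenever that position is strongly blue, exactly as described above.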

Interpretation of linear classifiers as template matching

Another interpretation for the weights W is that each row of W corresponds to a template (or sometimes also called a prototype) for one of the classes. The score of each class for an image is then obtained by comparing each template with the image using an inner product (or dot product) one by one to find the one that “fits” best. With this terminology, the linear classifier is doing template matching, where the templates are learned. Another way to think of it is that we are still effectively doing Nearest Neighbor, but instead of having thousands of training images we are only using a single image per class (although we will learn it, and it does not necessarily have to be one of the images in the training set), and we use the (negative) inner product as the distance instead of the L1 or L2 distance.
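A minimal sketch of this view (with random stand-in templates rather than learned rows of W): classify by the largest inner product, which is the same as the smallest negative-inner-product "distance":

```python
import numpy as np

np.random.seed(1)
num_classes, D = 10, 3072
templates = np.random.randn(num_classes, D)   # rows of W: one "template" per class
x = np.random.randn(D)                        # the image as a flattened vector

scores = templates.dot(x)        # inner product of each template with the image
pred = int(np.argmax(scores))    # the best-matching template wins

# Equivalent Nearest Neighbor view: argmin of the negative inner product "distance"
assert pred == int(np.argmin(-scores))
```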


Yayoi Kusama: Infinity Mirrors is a celebration of the legendary Japanese artist’s sixty-five-year career and promises to be one of 2017’s essential art experiences. Visitors will have the unprecedented opportunity to discover six of Kusama’s captivating Infinity Mirror Rooms alongside a selection of her other key works, including a number of paintings from her most recent series My Eternal Soul that have never been shown in the US. From her radical performances in the 1960s, when she staged underground polka dot “Happenings” on the streets of New York, to her latest Infinity Mirror Room, All the Eternal Love I Have for the Pumpkins, 2016, the Hirshhorn exhibition will showcase Kusama’s full range of talent for the first time in Washington, DC. Don’t miss this unforgettable sensory journey through the mind and legacy of one of the world’s most popular artists.


Note: all the content comes from http://cs231n.github.io/convolutional-networks/

A simple ConvNet is a sequence of layers, and every layer of a ConvNet transforms one volume of activations to another through a differentiable function. We use three main types of layers to build ConvNet architectures: Convolutional Layer, Pooling Layer, and Fully-Connected Layer (exactly as seen in regular Neural Networks). We will stack these layers to form a full ConvNet architecture.

  • INPUT [32x32x3] will hold the raw pixel values of the image, in this case an image of width 32, height 32, and with three color channels R,G,B.

  • CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This may result in a volume such as [32x32x12] if we decided to use 12 filters.

  • RELU layer will apply an elementwise activation function, such as the max(0,x) thresholding at zero. This leaves the size of the volume unchanged ([32x32x12]).

  • POOL layer will perform a downsampling operation along the spatial dimensions (width, height), resulting in a volume such as [16x16x12].

  • FC (i.e. fully-connected) layer will compute the class scores, resulting in a volume of size [1x1x10], where each of the 10 numbers corresponds to a class score, such as among the 10 categories of CIFAR-10. As with ordinary Neural Networks and as the name implies, each neuron in this layer will be connected to all the numbers in the previous volume.
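The shape bookkeeping in the layer list above can be checked with a small helper. The 3x3 filters with stride 1 and pad 1 for CONV, and the 2x2/stride-2 POOL, are assumptions chosen here only to reproduce the volumes quoted above:

```python
def conv_shape(h, w, num_filters, field=3, stride=1, pad=1):
    # spatial output size: (W - F + 2P) / S + 1
    out_h = (h - field + 2 * pad) // stride + 1
    out_w = (w - field + 2 * pad) // stride + 1
    return (out_h, out_w, num_filters)

def pool_shape(h, w, depth, field=2, stride=2):
    # pooling downsamples width and height but keeps the depth
    return ((h - field) // stride + 1, (w - field) // stride + 1, depth)

conv = conv_shape(32, 32, 12)   # CONV with 12 filters -> (32, 32, 12)
relu = conv                     # RELU leaves the volume size unchanged
pool = pool_shape(*relu)        # POOL halves width and height -> (16, 16, 12)
fc = (1, 1, 10)                 # FC maps everything to 10 class scores
print(conv, pool, fc)
```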


When you snapshot an instance to create an image, the snapshot keeps the instance’s flavor information in the image. Any new instance launched from this image must then use a flavor whose disk size is larger than the one recorded in the image. This disk requirement is called the “virtual disk”. To shrink the “virtual disk”, the following method has been tested and proved useful:

  1. Download the snapshot image from Glance as a raw image

  2. Sparsify the image using virt-sparsify (its `--convert` option writes the output in qcow2 format)

    virt-sparsify --convert qcow2 ***.raw ***.qcow2
  3. Convert the sparsified image back to the raw format

    qemu-img convert -f qcow2 -O raw ***.qcow2 ***.raw
  4. Check the virtual disk size

    qemu-img info ***.raw

The image from an OpenStack snapshot is usually much bigger than the disk space actually used in the virtual machine. That is because a snapshot captures the entire disk, so it will be much bigger than the original qcow image. How much bigger depends on what flavor you are using and what changes you made (for example, deploying a new kernel) before taking the snapshot.

Virt-sparsify is a tool which can make a virtual machine disk (or any disk image) sparse a.k.a. thin-provisioned. This means that free space within the disk image can be converted back to free space on the host.

Virt-sparsify can locate and sparsify free space in most filesystems (eg. ext2/3/4, btrfs, NTFS, etc.), and also in LVM physical volumes.

Virt-sparsify can also convert between some disk formats, for example converting a raw disk image to a thin-provisioned qcow2 image.

Virt-sparsify can operate on any disk image, not just ones from virtual machines. However if a virtual machine has multiple disks and uses volume management, then virt-sparsify will work but not be very effective (http://bugzilla.redhat.com/887826).


RDO provides a lot of images for the OpenStack platform, but after uploading the CentOS-7 image to Glance and launching an instance from it, the instance asks for a password at login even when the key pair file is supplied. This is very weird, and I did not find the cause.

The workaround is to add the following script to the customization script panel in the Configuration section when launching the instance.

sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd

OpenStack Networking (neutron) is a pluggable, scalable and API-driven system for managing networks and IP addresses. It enables Network-Connectivity-as-a-Service for other OpenStack services, such as OpenStack Compute. It provides an API for users to define networks and the attachments into them, and has a pluggable architecture that supports many popular networking vendors and technologies.

The network node runs the Networking plug-in and several agents that provision tenant networks and provide switching, routing, NAT, and DHCP services. It also handles external Internet connectivity for tenant virtual machine instances.

The Juno manual gives detailed installation steps, but I still met many problems and even doubted whether the manual was right. Of course, the manual is right, but your physical network may not be similar to the one in the manual. For example, my network node has only two network interface cards, while the one in the Juno manual has three. To adapt the steps to your own network environment, you first need to fully understand the networking architecture of OpenStack. Figure 1 shows the network layout in Juno; we use it as an example to illustrate the network architecture in OpenStack.


Ubuntu does not start any firewall by default, so we had better set up some rules for the network in case of any safety issues. The following is a short introduction to setting up iptables rules on Ubuntu.

1. Check whether the iptables is installed

root@ubuntu14:~# whereis iptables
iptables: /sbin/iptables /etc/iptables.rules /usr/share/iptables /usr/share/man/man8/iptables.8.gz

2. Check the iptables rules

root@ubuntu14:~# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere