Academic Integrity: tutoring, explanations, and feedback — we don’t complete graded work or submit on a student’s behalf.

Solve the parity-4 problem using the minimum number of neurons of a two-layer feedforward neural network


Question

Solve the parity-4 problem using the minimum number of neurons of a two-layer feedforward neural network. Use a bipolar activation function. The parity problem means the output must be +1 when there is an odd number of 1s on the inputs and -1 when there is an even number of 1s. In the case of parity-4 you have four inputs, one output, and 16 possible input patterns:

[0 0 0 0] -> -1    [0 1 0 0] -> +1    [1 0 0 0] -> +1    [1 1 0 0] -> -1
[0 0 0 1] -> +1    [0 1 0 1] -> -1    [1 0 0 1] -> -1    [1 1 0 1] -> +1
[0 0 1 0] -> +1    [0 1 1 0] -> -1    [1 0 1 0] -> -1    [1 1 1 0] -> +1
[0 0 1 1] -> -1    [0 1 1 1] -> +1    [1 0 1 1] -> +1    [1 1 1 1] -> -1

Explanation / Answer

I am uploading PyTorch code that uses a bipolar activation function instead of ReLU. I am using Ubuntu 16.04. Find the code below.
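As a side note, the bipolar sigmoid 2/(1 + e^-x) - 1 used in the code is mathematically identical to tanh(x/2), so its outputs fall in (-1, 1) as the bipolar targets require. A quick numerical check of this identity:

```python
import numpy as np

# The bipolar sigmoid 2/(1 + e^-x) - 1 maps any input into (-1, 1)
# and is the same function as tanh(x / 2).
x = np.linspace(-6.0, 6.0, 101)
bipolar = 2.0 / (1.0 + np.exp(-x)) - 1.0
assert np.allclose(bipolar, np.tanh(x / 2.0))
print("bipolar sigmoid == tanh(x/2)")
```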

---------------------------------------------------------------------------------

import torch
import numpy as np
from torch.autograd import Variable

def biPolarActivation(x):
    # bipolar sigmoid: maps to (-1, 1); equivalent to tanh(x/2)
    return 2.0 / (1.0 + torch.exp(-x)) - 1.0

def getData():
    col1 = []
    for i in range(16 // 2):  # integer division; 16/2 is a float in Python 3
        col1.extend([0, 1])
  
    col2 = []
    for i in range(16 // 2):
        if i%2==0:
            col2.extend([0, 0])
        else:
            col2.extend([1, 1])
  
    col3 = []
    for i in range(16 // 4):
        if i%2==0:
            col3.extend([0, 0, 0, 0])
        else:
            col3.extend([1, 1, 1, 1])
  
    col4 = []
    for i in range(16):
        if i < 8:
            col4.extend([0])
        else:
            col4.extend([1])
  
    labels = []
    # float32 is required for torch.nn.Linear; the int64 default would fail
    data = torch.from_numpy(np.asarray([col1, col2, col3, col4], dtype=np.float32).T)
    for i in range(16):
        if data[i,:].sum()%2 == 0:
            labels.append(-1)
        else:
            labels.append(1)
    # shape (16, 1) and float32 to match the model output for MSELoss
    labels = torch.from_numpy(np.asarray(labels, dtype=np.float32).reshape(-1, 1))
    return Variable(data), Variable(labels)


class DNN(torch.nn.Module):
    def __init__(self):
        super(DNN, self).__init__()
        self.linear1 = torch.nn.Linear(4, 16)
        self.linear2 = torch.nn.Linear(16, 1)
        self.biPolarAct = biPolarActivation
  
    def forward(self, x):
        h = self.biPolarAct(self.linear1(x))  # hidden layer with bipolar activation
        pred = self.linear2(h)
        return pred

data, labels = getData()
model = DNN()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for t in range(100):
    y_pred = model(data)
    loss = criterion(y_pred, labels)
    print(t, loss.item())  # loss.data[0] is deprecated in modern PyTorch

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
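The network above uses 16 hidden neurons and learns the mapping by gradient descent, but the question asks for the minimum. As a sketch (not part of the original answer), a classic hand-crafted construction solves parity-4 with just 4 hidden neurons plus 1 output neuron using the hard bipolar (sign) activation: hidden neuron j fires exactly when the input contains at least j ones, and alternating output weights turn that staircase into parity.

```python
import numpy as np

def sign(x):
    # hard bipolar activation: +1 for x >= 0, -1 otherwise
    return np.where(x >= 0, 1.0, -1.0)

# Hidden layer: 4 neurons; neuron j outputs +1 iff the input has >= j ones.
W1 = np.ones((4, 4))                  # each hidden unit sums all four inputs
b1 = -np.array([0.5, 1.5, 2.5, 3.5])  # thresholds between 0..4 ones

# Output layer: alternating weights convert the staircase into parity.
W2 = np.array([1.0, -1.0, 1.0, -1.0])
b2 = -1.0

def parity4(x):
    h = sign(W1 @ x + b1)      # bipolar hidden activations
    return sign(W2 @ h + b2)   # +1 for odd parity, -1 for even

# Verify against all 16 input patterns.
for n in range(16):
    x = np.array([(n >> k) & 1 for k in range(4)], dtype=float)
    target = 1.0 if int(x.sum()) % 2 == 1 else -1.0
    assert parity4(x) == target
print("all 16 patterns correct")
```

With s ones in the input, the output pre-activation W2 @ h + b2 is +1 when s is odd and -1 when s is even, so 5 neurons in total suffice for parity-4.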
