Question
Python Program
Introduction: It is possible to calculate the probability of a specific result of a binomially distributed random variable. The formula P(X = k) = \binom{n}{k} p^k (1-p)^{n-k} gives the desired probability. The question arises: how can a binomially distributed random variable be simulated? This question motivates considering the Bernoulli trials that make up the binomial experiment. In this project, theoretical results are compared with the results of simulations.
1. Write a Python program to simulate the specific problem above. In this program, import and use the Python random number generator. You need to account for the finite number of Bernoulli trials in the problem and for the number of successes among those trials.
Explanation / Answer
Introduction: calculating the probability of a random variable that is binomially distributed.
A binomially distributed random variable has two parameters, n and p, and can be thought of as the distribution of the number of heads obtained when flipping a biased coin n times, where the probability of getting a head on each flip is p.
(More formally, it is a sum of n independent Bernoulli random variables, each with parameter p.)
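Since the question asks for a simulation built directly from Bernoulli trials, here is a minimal sketch of that comparison using the Python random module. It assumes n = 10 trials, success probability p = 0.5, and 10,000 simulated experiments; these parameter values are illustrative, not taken from the original problem.

import random
from math import comb

n, p = 10, 0.5            # number of Bernoulli trials and success probability (assumed values)
num_experiments = 10_000  # how many binomial experiments to simulate (assumed value)

# Each experiment runs n Bernoulli trials and counts the successes.
counts = [0] * (n + 1)
for _ in range(num_experiments):
    successes = sum(1 for _ in range(n) if random.random() < p)
    counts[successes] += 1

# Compare empirical relative frequencies with the theoretical PMF
# P(X = k) = C(n, k) * p**k * (1 - p)**(n - k).
for k in range(n + 1):
    theory = comb(n, k) * p**k * (1 - p)**(n - k)
    print('k=%2d  theory=%.4f  simulation=%.4f' % (k, theory, counts[k] / num_experiments))

As the number of simulated experiments grows, the empirical frequencies should converge to the theoretical probabilities, by the law of large numbers. The original answer's code below applies the same Bernoulli-sampling idea inside a Gibbs sampling step for what appears to be a restricted Boltzmann machine: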
import numpy as np

np.random.seed(10)

def sigmoid(u):
    return 1 / (1 + np.exp(-u))

def gibbs_vhv(W, hbias, vbias, x):
    # Visible -> hidden: activation probabilities, then Bernoulli (0/1) samples
    f_s = sigmoid(np.dot(x, W) + hbias)
    h_sample = np.random.binomial(size=f_s.shape, n=1, p=f_s)
    # Hidden -> visible: reconstruction probabilities, then Bernoulli (0/1) samples
    f_u = sigmoid(np.dot(h_sample, W.transpose()) + vbias)
    v_sample = np.random.binomial(size=f_u.shape, n=1, p=f_u)
    return [f_s, h_sample, f_u, v_sample]

def reconstruction_error(f_u, x):
    # Cross-entropy between the data x and the reconstruction probabilities f_u
    # (f_u is already a sigmoid output, so it is used directly here)
    cross_entropy = -np.mean(
        np.sum(x * np.log(f_u) + (1 - x) * np.log(1 - f_u), axis=1))
    return cross_entropy

X = np.array([[1, 0, 0, 0]])

# Weights from the visible layer to the hidden layer
W = np.array([[-3.85, 10.14, 1.16],
              [6.69, 2.84, -7.73],
              [1.37, 10.76, -3.98],
              [-6.18, -5.89, 8.29]])
hbias = np.array([1.04, -4.48, 2.50])          # 3 biases for the 3 hidden neurons
vbias = np.array([-6.33, -1.68, -1.25, 3.45])  # 4 biases for the 4 visible neurons

k = 2  # number of Gibbs sampling steps
v_sample = X
for i in range(k):
    [f_s, h_sample, f_u, v_sample] = gibbs_vhv(W, hbias, vbias, v_sample)
    if i < 2:
        print('f_s:', f_s)
        print('h_sample:', h_sample)
        print('f_u:', f_u)
        print('v_sample:', v_sample)
    print('iter:', i, ' h:', h_sample, ' x:', v_sample,
          ' entropy: %.3f' % reconstruction_error(f_u, v_sample))
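Note that the Gibbs sampling code above relies on exactly the ingredient the question asks about: each call to np.random.binomial with n=1 draws one Bernoulli trial per neuron, with the success probabilities given by the sigmoid activations. Summing n such draws that share a common p would produce precisely the binomial random variable described in the introduction.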