Assessing convergence in a multi-armed bandit
Evaluating how a strategy performs and converges in a multi-armed bandit problem is crucial for understanding its effectiveness. By analyzing how frequently each arm is selected over time, we can follow the learning process and gauge the strategy's ability to identify and exploit the best arm. In this exercise, you will visualize the selection percentage of each arm over iterations to assess the convergence of an epsilon-greedy strategy.
The selected_arms array, which records the arm pulled at each iteration, has been pre-loaded for you.
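For context, a minimal sketch of how such a selection history might be produced by an epsilon-greedy agent is shown below. The probabilities, epsilon value, and random seed are illustrative assumptions, not the course's pre-loaded values; only the variable names selected_arms, true_bandit_probs, n_bandits, and n_iterations mirror the exercise.

import numpy as np

# Illustrative setup; the course pre-loads its own values
true_bandit_probs = np.array([0.1, 0.5, 0.8])
n_bandits = len(true_bandit_probs)
n_iterations = 1000
epsilon = 0.1

counts = np.zeros(n_bandits)                  # pulls per arm
values = np.zeros(n_bandits)                  # estimated mean reward per arm
selected_arms = np.zeros(n_iterations, dtype=int)
rng = np.random.default_rng(42)

for i in range(n_iterations):
    # Explore with probability epsilon, otherwise exploit the current best estimate
    if rng.random() < epsilon:
        arm = int(rng.integers(n_bandits))
    else:
        arm = int(np.argmax(values))
    reward = float(rng.random() < true_bandit_probs[arm])   # Bernoulli reward
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]      # incremental mean update
    selected_arms[i] = arm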
This exercise is part of the course
Reinforcement Learning with Gymnasium in Python
Exercise instructions
- Initialize an array selections_percentage with zeros, with dimensions that track the selection percentage of each bandit over time.
- Compute selections_percentage over time by taking the cumulative sum of selections for each bandit over iterations and dividing by the iteration number (a small worked example follows this list).
- Plot the cumulative selection percentages for each bandit to visualize how often each bandit is chosen over iterations.
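To make the second step concrete, here is a small worked example with made-up selections: four iterations and two bandits, where bandit 2 is chosen in iterations 1, 3, and 4. The numbers are purely illustrative.

import numpy as np

# One-hot matrix of selections: rows are iterations, columns are bandits
one_hot = np.array([[0, 1],
                    [1, 0],
                    [0, 1],
                    [0, 1]])

# Cumulative count per bandit divided by the iteration number (1-based)
percentages = np.cumsum(one_hot, axis=0) / np.arange(1, 5).reshape(-1, 1)
print(percentages)
# [[0.    1.   ]
#  [0.5   0.5  ]
#  [0.333 0.667]
#  [0.25  0.75 ]]   (values rounded)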
Interactive exercise
Fill in the sample code to complete this exercise successfully.
# Initialize the selection percentages with zeros
selections_percentage = ____
for i in range(n_iterations):
    selections_percentage[i, selected_arms[i]] = 1
# Compute the cumulative selection percentages
selections_percentage = np.____(____, axis=____) / np.arange(1, ____).reshape(-1, 1)
for arm in range(n_bandits):
    # Plot the cumulative selection percentage for each arm
    plt.plot(____, label=f'Bandit #{arm+1}')
plt.xlabel('Iteration Number')
plt.ylabel('Percentage of Bandit Selections (%)')
plt.legend()
plt.show()
for i, prob in enumerate(true_bandit_probs, 1):
print(f"Bandit #{i} -> {prob:.2f}")