Reinforcement Learning

Reinforcement Learning (RL) trains agents to maximize cumulative rewards through interaction with environments. This document presents a model-free actor-critic algorithm that adapts Deep Q-Learning principles for continuous action spaces, successfully solving over 20 simulated physics tasks. The algorithm achieves competitive performance with planning algorithms and can learn policies directly from raw pixel inputs.

Reinforcement Learning (RL) involves training an agent to take actions in an environment so as to maximize a cumulative reward signal. The agent interacts with the environment and learns from feedback in the form of rewards or penalties for its actions. The goal of reinforcement learning is to find an optimal policy, i.e. the decision-making strategy that maximizes the long-term cumulative reward.
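
To make the interaction loop concrete, here is a minimal Python sketch. The Environment and Agent classes, the toy one-dimensional task, and the reward values are hypothetical illustrations (loosely following the common Gym-style reset/step convention), not part of any specific library.

    import random

    class Environment:
        """Toy 1-D task: the agent tries to reach position 0."""
        def reset(self):
            self.pos = random.randint(-5, 5)
            return self.pos

        def step(self, action):                      # action is -1 or +1
            self.pos += action
            reward = 1.0 if self.pos == 0 else -0.1  # feedback signal
            return self.pos, reward, self.pos == 0   # state, reward, done

    class Agent:
        def act(self, state):
            return -1 if state > 0 else 1            # hand-coded policy

    env, agent = Environment(), Agent()
    state, total_reward, done = env.reset(), 0.0, False
    while not done:                                  # one episode
        action = agent.act(state)
        state, reward, done = env.step(action)
        total_reward += reward                       # cumulative reward
    print("episode return:", total_reward)

A learning agent would replace the hand-coded act method with a policy improved from experience, which is what the algorithm described below does for continuous actions.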

We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We
present an actor-critic, model-free algorithm based on the deterministic policy gradient that can
operate over continuous action spaces. Using the same learning algorithm, network architecture and
hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including
classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car
driving. Our algorithm is able to find policies whose performance is competitive with those found by
a planning algorithm with full access to the dynamics of the domain and its derivatives. We further
demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from
raw pixel inputs.
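
As a rough illustration of how such an actor-critic update can look, here is a condensed PyTorch sketch in the spirit of the algorithm described above. The network sizes, learning rates, soft-update rate, and batch format are hypothetical assumptions, and the replay buffer, exploration noise, and pixel-processing convolutional layers are omitted; this is a sketch of the update rule, not the paper's exact implementation.

    import torch
    import torch.nn as nn

    STATE_DIM, ACTION_DIM, GAMMA, TAU = 3, 1, 0.99, 0.005  # assumed sizes/rates

    def mlp(in_dim, out_dim):
        return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                             nn.Linear(64, out_dim))

    actor = mlp(STATE_DIM, ACTION_DIM)             # deterministic policy mu(s)
    critic = mlp(STATE_DIM + ACTION_DIM, 1)        # action-value Q(s, a)
    actor_targ = mlp(STATE_DIM, ACTION_DIM)        # slow-moving target copies
    critic_targ = mlp(STATE_DIM + ACTION_DIM, 1)
    actor_targ.load_state_dict(actor.state_dict())
    critic_targ.load_state_dict(critic.state_dict())

    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def update(s, a, r, s2, done):
        # Tensors of shape (batch, dim), assumed sampled from a replay buffer.
        # Critic: regress Q(s, a) toward the bootstrapped Bellman target,
        # computed with the target networks for stability.
        with torch.no_grad():
            q_next = critic_targ(torch.cat([s2, actor_targ(s2)], dim=1))
            target = r + GAMMA * (1.0 - done) * q_next
        q = critic(torch.cat([s, a], dim=1))
        critic_loss = nn.functional.mse_loss(q, target)
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
        # Actor (deterministic policy gradient): ascend the critic's value
        # of the actor's own actions; the gradient flows through the action.
        actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        # Polyak averaging keeps the targets slowly tracking the learners.
        with torch.no_grad():
            for net, targ in ((actor, actor_targ), (critic, critic_targ)):
                for p, tp in zip(net.parameters(), targ.parameters()):
                    tp.mul_(1.0 - TAU).add_(TAU * p)

The target networks and soft updates carry Deep Q-Learning's stabilization ideas over to the continuous setting, while the actor update exploits the fact that a deterministic policy lets the critic's gradient flow directly through the chosen action.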
