Understanding state-value and action-value Bellman equations


In reinforcement learning, the Bellman optimality equations are central to defining the optimal policy a learning algorithm should follow. The following two equations are commonly cited:



the state-value form

$$v_*(s) = \max_{a} \sum_{s',\,r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_*(s') \,\bigr]$$

and the action-value form

$$q_*(s, a) = \sum_{s',\,r} p(s', r \mid s, a)\,\bigl[\, r + \gamma \max_{a'} q_*(s', a') \,\bigr]$$
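If it helps frame my question, I believe the two are linked by

$$v_*(s) = \max_{a} q_*(s, a),$$

so I am really asking where that max ends up when each function is written recursively in terms of itself.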



At a high level I understand what each does: the state-value function gives the expected return from a state under the optimal policy, and the action-value function gives the expected return of taking a particular action from a state and then acting optimally. What I don't understand is why these equations work out mathematically.



Why is the max placed outside the sum in the state-value equation but inside it for the action-value equation? I must be missing something fundamental about how each equation works. Can someone explain the difference to me?
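To make my confusion concrete, here is a minimal value-iteration sketch in Python. The two-state MDP and the `P[s][a] -> (prob, next_state, reward)` layout are made up purely for illustration:

```python
import numpy as np

# Toy 2-state, 2-action MDP, invented for illustration only.
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.5, 0, 1.0), (0.5, 1, 2.0)]},
    1: {0: [(1.0, 1, 0.0)], 1: [(1.0, 0, 5.0)]},
}
gamma = 0.9
n_states, n_actions = 2, 2

V = np.zeros(n_states)
Q = np.zeros((n_states, n_actions))

for _ in range(500):  # sweep the backups until they numerically converge
    for s in range(n_states):
        # State-value backup: expectation over (s', r) inside,
        # max over the current action a OUTSIDE the sum.
        V[s] = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in range(n_actions)
        )
    for s in range(n_states):
        for a in range(n_actions):
            # Action-value backup: the first action a is already fixed,
            # so the max moves INSIDE, over the next action a' at s'.
            Q[s, a] = sum(
                p * (r + gamma * max(Q[s2])) for p, s2, r in P[s][a]
            )

print(V)              # optimal state values v*
print(Q.max(axis=1))  # max_a q*(s, a) -- matches v* at the fixed point
```

Numerically both backups converge and `Q.max(axis=1)` matches `V`, so I can verify the two equations are consistent; what I am after is the intuition for why the max moves inside in the second one.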
      reinforcement-learning markov-process monte-carlo
asked 4 mins ago by Bolboa