This is a 2D dynamic programming problem. We let dp[i][j] be the length of the longest palindromic subsequence (LPS) in s[i..j]. Clearly dp[i][i] = 1. As a base case we also define dp[i][i-1] = 0, representing the empty range.

Then we can write the recurrence. If s[i] == s[j], the LPS in s[i..j] is just the LPS in s[i+1..j-1] surrounded by s[i] and s[j]. Therefore, dp[i][j] = dp[i+1][j-1] + 2. If s[i] != s[j], the two end characters cannot both be the outer pair of one palindrome, so we drop one of them and take the better result: dp[i][j] = max(dp[i+1][j], dp[i][j-1]).
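The recurrence can be expressed directly as a memoized recursion, which is often the easiest way to convince yourself it is correct before writing the table-filling version. This is a sketch with a hypothetical standalone function name, `lps_length`:

```python
from functools import lru_cache

def lps_length(s: str) -> int:
    @lru_cache(maxsize=None)
    def lps(i: int, j: int) -> int:
        if i > j:        # empty range, the dp[i][i-1] = 0 base case
            return 0
        if i == j:       # single character, dp[i][i] = 1
            return 1
        if s[i] == s[j]:
            return lps(i + 1, j - 1) + 2
        return max(lps(i + 1, j), lps(i, j - 1))

    return lps(0, len(s) - 1) if s else 0
```

For example, `lps_length("bbbab")` returns 4 (the subsequence "bbbb"), matching what the table-based solution computes.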

In the 2D array dp, an element may depend on the elements below it, to its left, or to its lower left. Therefore, we can fill out the array diagonally, in order of increasing window size. This approach has time and space complexity equal to the size of the dp array, which is $O(n^2)$. But since row i only depends on row i+1 and on entries already computed in row i, we can discard rows that are no longer useful and reduce the space complexity to $O(n)$.

class Solution:
    def longestPalindromeSubseq(self, s: str) -> int:
        n = len(s)
        dp = [[0] * n for _ in range(n)]
        for i in range(n):
            dp[i][i] = 1  # a single character is a palindrome of length 1
        for w in range(2, n + 1):  # window size
            for i in range(n - w + 1):
                j = i + w - 1
                if s[i] == s[j]:
                    dp[i][j] = dp[i+1][j-1] + 2
                else:
                    dp[i][j] = max(dp[i+1][j], dp[i][j-1])
        return dp[0][n - 1]
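The $O(n)$-space optimization mentioned above can be sketched as follows. Since row i reads only row i+1 (as `prev`) and earlier entries of row i itself (as `cur`), two 1-D arrays suffice. This is a sketch with a hypothetical standalone function name, `longest_palindrome_subseq`:

```python
def longest_palindrome_subseq(s: str) -> int:
    n = len(s)
    if n == 0:
        return 0
    prev = [0] * n                    # holds dp[i+1][*]
    for i in range(n - 1, -1, -1):    # fill rows bottom-up
        cur = [0] * n                 # holds dp[i][*]; zeros give dp[i+1][i] = 0
        cur[i] = 1                    # dp[i][i] = 1
        for j in range(i + 1, n):
            if s[i] == s[j]:
                cur[j] = prev[j - 1] + 2          # dp[i+1][j-1] + 2
            else:
                cur[j] = max(prev[j], cur[j - 1]) # max(dp[i+1][j], dp[i][j-1])
        prev = cur
    return prev[n - 1]                # dp[0][n-1]
```

Allocating a fresh `cur` each iteration keeps the code simple and still uses only two rows at a time; the entries below index i are never read, so their zero values stand in for the dp[i][i-1] = 0 convention.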