The problem of finding the Longest Palindromic Subsequence (LPS) of a string can be reduced to finding the Longest Common Subsequence (LCS) of two strings: one string is the original, and the second is the reverse of the original.
The Longest Common Subsequence problem is like the pattern-matching problem, except that you are allowed to skip characters in the text, and the goal is to return a single match that is as long as possible.
LCS can be solved in $O(n^2)$ time using recursion with memoization.
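Here is a minimal top-down sketch of that $O(n^2)$ approach in Python (the function names are my own, not from any of the papers cited below); the `lps_length` helper also shows the LPS-to-LCS reduction described above:

```python
from functools import lru_cache

def lcs_length(a: str, b: str) -> int:
    """Length of the LCS of a and b via memoized recursion:
    O(len(a) * len(b)) states, O(1) work per state."""
    @lru_cache(maxsize=None)
    def rec(i: int, j: int) -> int:
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:                    # match: extend the subsequence
            return 1 + rec(i + 1, j + 1)
        # mismatch: skip one character in a or one in b
        return max(rec(i + 1, j), rec(i, j + 1))
    return rec(0, 0)

def lps_length(s: str) -> int:
    """Longest palindromic subsequence = LCS of s and its reverse."""
    return lcs_length(s, s[::-1])

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
print(lps_length("character"))          # 5 ("carac")
```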
There exists a slightly faster algorithm, discovered by Masek and Paterson, with time complexity $O(n^2/\lg n)$ (based on the "Four Russians" technique).
Paper link: Masek and Paterson
Hirschberg presented two other algorithms to compute the LCS of two strings $A$ (of size $n$) and $B$ (of size $m$). Both are based on the assumption that the symbols appearing in these strings come from an alphabet of size $t$ (which is true in most cases), so a symbol can be stored using $\log t$ bits, fits in one word of memory, and two symbols can be compared in $O(1)$ time.
The number of distinct symbols in string $B$ is denoted by $s$, which is of course at most both $m$ and $t$.
The first algorithm requires $O(pn + n\lg n)$ time, where $p$ is the length of the LCS. It is used when the length of the LCS is expected to be small. When we solve the problem with plain Dynamic Programming, we find that most of the entries in the matrix are the same, so we can use the idea of Sparse Dynamic Programming, computing values only at positions where the strings match; a sketch of this idea follows.
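As an illustration of that sparse idea (this is the classic Hunt-Szymanski-style formulation, not Hirschberg's exact $O(pn + n\lg n)$ procedure; names are my own): the DP is evaluated only at the $R$ match points, which turns LCS into a longest-increasing-subsequence computation running in $O((R + n)\lg n)$ time.

```python
import bisect
from collections import defaultdict

def lcs_length_sparse(a: str, b: str) -> int:
    """Sparse LCS: dynamic programming restricted to the R match points,
    done as a longest-increasing-subsequence over positions in b."""
    occ = defaultdict(list)              # positions of each symbol in b
    for j, ch in enumerate(b):
        occ[ch].append(j)
    # thresh[k] = smallest position in b at which a common subsequence
    # of length k+1 can end (the list stays strictly increasing)
    thresh = []
    for ch in a:
        # scan this row's matches right-to-left so an update cannot be
        # reused by another match in the same row of the DP matrix
        for j in reversed(occ.get(ch, ())):
            k = bisect.bisect_left(thresh, j)
            if k == len(thresh):
                thresh.append(j)
            else:
                thresh[k] = j
    return len(thresh)

print(lcs_length_sparse("ABCBDAB", "BDCABA"))  # 4
```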
The second algorithm requires $O(p(m+1-p)\log n)$ time. It is very efficient when the length of the LCS is close to $m$; in that case the running time is close to $O(n \lg n)$.
Detailed procedures and algorithms are explained in Hirschberg's paper.
Another good algorithm, proposed by Sohel Rahman, runs in $O(R \log\log n)$ time, where $R$ is the total number of ordered pairs of positions at which the two strings match. It is not useful when $R$ is of the order of $n^2$, but there are many cases where $R$ is of the order of $n$. It uses the concept of RMQ (Range Maximum Query).
Paper link: Rahman
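Here is a rough sketch of that match-point idea (a simplified variant, not the paper's exact algorithm: it substitutes a Fenwick tree for the RMQ structure, so it runs in $O(R \log n)$ rather than $O(R \log\log n)$; all names are my own):

```python
from collections import defaultdict

def lcs_length_matchpoints(a: str, b: str) -> int:
    """LCS over match points only: the value of a match (i, j) is
    1 + (best value among earlier matches with smaller j), found with
    prefix-maximum queries over positions of b."""
    m = len(b)
    occ = defaultdict(list)              # positions of each symbol in b
    for j, ch in enumerate(b):
        occ[ch].append(j)

    tree = [0] * (m + 1)                 # Fenwick tree over 1..m, prefix maxima

    def raise_to(j: int, v: int) -> None:
        # record value v at 1-based position j (values only ever grow)
        while j <= m:
            if tree[j] < v:
                tree[j] = v
            j += j & -j

    def prefix_max(j: int) -> int:
        # maximum recorded value over 1-based positions 1..j
        best = 0
        while j > 0:
            if tree[j] > best:
                best = tree[j]
            j -= j & -j
        return best

    best = 0
    for ch in a:
        # evaluate all matches of this row before writing any of them back,
        # so no match chains on another match in the same row
        row = [(j, 1 + prefix_max(j)) for j in occ.get(ch, ())]
        for j, v in row:
            raise_to(j + 1, v)           # shift to 1-based indexing
            if v > best:
                best = v
    return best

print(lcs_length_matchpoints("ABCBDAB", "BDCABA"))  # 4
```

The per-row batching mirrors the role the RMQ structure plays in the paper: each match may only build on matches from strictly earlier rows and strictly smaller columns.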