I am running on Redhat and years ago when I had my data files with the index info split out I could do a "grep" of the data portion. Now when I try to quickly scan for something I get "Binary file X.LDF matches"
Any suggestions? I know I can use ODBC connectors and I do, but just looking for a quick utility like I used to have.
Thanks for any suggestions.
strings -a filename | grep pattern
There's also the --text option to (GNU) grep, which forces it to treat the file as text, but that's likely to spew a bunch of control characters to the terminal, since the file is not, in fact, text. Using strings is a better bet.
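To make the difference concrete, here's a quick sketch. The file `data.bin` is a hypothetical stand-in for your .LDF file, built with some NUL and high bytes mixed into the text:

```shell
# Build a small sample "binary" data file: text runs separated by
# NUL (\000) and other non-printable bytes.
printf 'customer_id\000\001\002ACME Corp\000\377invoice-42\000' > data.bin

# Plain grep sees the NUL bytes and refuses to print matches:
grep ACME data.bin
# -> Binary file data.bin matches

# strings -a pulls out the printable runs, which grep can then search:
strings -a data.bin | grep ACME
# -> ACME Corp
```

The `-a` flag tells strings to scan the whole file rather than only the sections it thinks hold initialized data, which matters for arbitrary data files like these.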
I should note I'm assuming here the text data in your file is ASCII or UTF-8. If it's some other character encoding, "strings" won't recognize it. (Though it's possible GNU strings has support for other encodings; I haven't checked.)
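For what it's worth, the GNU binutils version of strings does take an `-e`/`--encoding` option for wide characters (`l`/`b` for 16-bit little/big endian, `L`/`B` for 32-bit). A sketch, using a tiny hand-made UTF-16LE file:

```shell
# "ACME" encoded as UTF-16LE: each ASCII byte followed by a NUL byte.
printf 'A\000C\000M\000E\000' > utf16.bin

# A default (single-byte) scan finds no run of 4+ printable characters,
# because the NUL bytes break up the text:
strings utf16.bin | grep ACME || echo "no match"
# -> no match

# -e l tells GNU strings to look for 16-bit little-endian characters:
strings -e l utf16.bin | grep ACME
# -> ACME
```

If your data files came from a Windows-side tool, the 16-bit encodings are the ones most likely to be relevant.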
Thanks, Michael - I think strings will do the trick for me.