Oh, and Luke? Pretty nice showing for a one-person unpaid hobby aggregator, mate
Thanks Phil :-)
I'm actually not that surprised SharpReader managed to get all these tests right; what does surprise me is that 9 out of 11 aggregators don't...
Nice work Luke ;)
Good to see something conforming to standards these days.
Indeed; SharpReader is still the best I've tried. :)
The only two gripes I have with it are that it doesn't like file:// links (i.e. you can't subscribe to or even view a feed if the file is local rather than served) and it takes some time to shut down if you have a lot of feeds and keep items for a long time. :) Both very minor -- well done!
Posted by Eric at December 20, 2005 11:30 AM
I knew I liked SharpReader, and this is just another reason. Hobby indeed!
Posted by Jack Vinson at December 20, 2005 1:46 PM
Dunno if this is the right place. I want to submit my RSS feed XML; I thought I could submit it in SharpReader, but I am not able to... can I do this? Where else can I submit my RSS feed XML URL?
Posted by gsty at January 6, 2006 12:43 AM
Dude,
You wrote the best aggregator I have found yet; this shouldn't surprise you at all. Love the SharpReader!
Hey,
Not sure if you mind, but you should really obfuscate your code. I just decompiled your CustomComponents.dll, as well as SharpReader itself, using Lutz Roeder's .NET Reflector. I could just cut and paste SharpReader's code, compile my own RSS reader, and claim all the credit. Not only that, I could use your nice Hutteman.Components.TreeViewPlus control too. I personally wouldn't, but there are plenty of people who would.
Just use Dotfuscator Community Edition (it comes with Visual Studio) to obfuscate your code before you compile your installer application or zip your files up.
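For what it's worth, .NET 2.0 also added declarative obfuscation attributes that Dotfuscator can honor at build time. A minimal sketch of that approach (illustrative only - SharpReader targets .NET 1.1, where you would configure the tool directly instead, and the TreeViewPlus stand-in below is hypothetical):

    // Illustrative sketch: .NET 2.0 declarative obfuscation attributes,
    // which Dotfuscator and similar tools can honor at build time.
    using System.Reflection;

    // Mark the assembly for obfuscation; false means other assemblies may
    // reference its public types, so public names stay stable.
    [assembly: ObfuscateAssembly(false)]

    namespace Hutteman.Components
    {
        // Hypothetical exclusion: keep this control's public name readable
        // while still letting the obfuscator rename everything else.
        [Obfuscation(Exclude = true, ApplyToMembers = false)]
        public class TreeViewPlus : System.Windows.Forms.TreeView
        {
        }
    }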
Also, the latest version I installed ships with the PDB files! It's a debug build - that's even worse!
Cheers,
Pinky
Posted by Pinky at January 16, 2006 5:40 PM
Would you copy every single method? That's a lot of work. A co-worker used one of the commercial decompilers that writes out project files to recover some lost code. Even though it might look easy, it takes a lot of extra work since so many details go wrong. Beyond that, people who use decompilers to steal code just suck. If someone cloned SharpReader by stealing the code, he would immediately land on the amateurs-and-idiots list, not on the credits list. Reflector helps me a lot in figuring out how APIs work. Abusing it for this... sigh, sucks :(
Posted by Anonymous at January 24, 2006 2:41 AM
No, I wouldn't copy the code, but someone might.
I would be more concerned about my custom tools. I can just pick up his DLL, drop it into my own app and use his custom tool that he probably spent hours working on. That's what I would worry about.
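To make that concrete: with an unobfuscated DLL, hosting someone else's control takes only a few lines of reflection. A minimal sketch (the type name comes from the earlier comment; the parameterless constructor is an assumption):

    // Sketch: load an unobfuscated DLL straight off disk and reuse its
    // control via reflection - no source code required.
    using System;
    using System.Reflection;
    using System.Windows.Forms;

    class ReuseDemo
    {
        [STAThread]
        static void Main()
        {
            Assembly asm = Assembly.LoadFrom("CustomComponents.dll");
            Type treeType = asm.GetType("Hutteman.Components.TreeViewPlus");
            Control tree = (Control)Activator.CreateInstance(treeType);

            // Someone else's hours of work, hosted in a new app in five lines.
            Form host = new Form();
            tree.Dock = DockStyle.Fill;
            host.Controls.Add(tree);
            Application.Run(host);
        }
    }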
But you have missed my point anyway. It is SOOOOO easy to obfuscate code, so why not just do it and prevent this shit anyway?
Pinky
Posted by Pinky at January 29, 2006 9:18 PM
Very nice product.
I'd like SharpReader more if it handled internal proxies correctly.
Like many of us, we have internal sites (which are not proxied) and external sites (which are proxied through our Squids). Is there any way to turn the proxy off for internal sites - or better, make it a property of the site?
The default on subscribe would be whatever is set in the "Use a Proxy Server" (proxy options) checkbox.
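For reference, the .NET WebProxy class already supports exactly this kind of bypass list, so a per-site setting could plausibly build on it. A minimal sketch, with made-up proxy and host names:

    // Sketch of a per-host proxy bypass using System.Net.WebProxy.
    // The proxy address and host names are placeholders.
    using System;
    using System.Net;

    class ProxyDemo
    {
        static void Main()
        {
            WebProxy proxy = new WebProxy(
                "http://squid.example.com:3128",             // external proxy (assumed)
                true,                                        // bypass for local addresses
                new string[] { @"intranet\.example\.com" }); // regex bypass list

            Uri feed = new Uri("http://intranet.example.com/feed.xml");
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(feed);
            request.Proxy = proxy;

            Console.WriteLine(proxy.IsBypassed(feed)
                ? "fetching directly (internal site)"
                : "fetching via proxy (external site)");
        }
    }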
Additionally I'd like to alias feeds (At Home Slashdot, At Office Slashdot) for the same URL, just to set different properties.
Thanks
SharpReader (SR) is a great tool. I like the way you can organize all your news feeds...
One feature request: as I use SR both at work and at home, it would be helpful (just throwing this out right now) if my list of subscriptions could be held online (at some SharpReader site or other) so that I don't have to keep synchronizing my subscription list between work and home. This site could (in the future) host an online reader based on the same subscription list (OPML) - see the sketch below.
Would like to hear your opinion (you have probably already considered this possibility)...
In case you find it useful, I would like to contribute to the development of such a tool (an add-on to SR).
Miguel Mattos
mmmattos@gmail.com
Porto Alegre,
Brazil
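The OPML subscription list mentioned above is just a small XML document, which is what such a sync service would shuttle between machines. A sketch of exporting one (the feed title and URL below are placeholders, not SharpReader's actual export code):

    // Sketch: writing a subscription list out as minimal OPML.
    using System.Xml;

    class OpmlExport
    {
        static void Main()
        {
            string[][] feeds = {
                new string[] { "Example Feed", "http://example.com/rss.xml" }
            };

            XmlTextWriter w = new XmlTextWriter("subscriptions.opml",
                System.Text.Encoding.UTF8);
            w.Formatting = Formatting.Indented;
            w.WriteStartElement("opml");
            w.WriteAttributeString("version", "1.1");
            w.WriteStartElement("head");
            w.WriteElementString("title", "My subscriptions");
            w.WriteEndElement(); // head
            w.WriteStartElement("body");
            foreach (string[] feed in feeds)
            {
                w.WriteStartElement("outline");
                w.WriteAttributeString("title", feed[0]);
                w.WriteAttributeString("type", "rss");
                w.WriteAttributeString("xmlUrl", feed[1]);
                w.WriteEndElement();
            }
            w.WriteEndElement(); // body
            w.WriteEndElement(); // opml
            w.Close();
        }
    }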
I wish that SharpReader would be able to:
1) use a renderer other than IE for the content area (could Firefox be used in the same way?)
2) use external, user-modifiable stylesheets for the content area
I also hit a bug where SR stops updating some feeds until I quit and restart the application. This is the reason I am currently migrating to another feed reader, even though I prefer SR in every other respect. My internet connection drops a lot, and it seems that SR does not handle these disconnections well. :(
Posted by Kristoffer B at March 14, 2006 8:08 AM
Hello. SharpReader doesn't use proxy settings for loading pictures. Please fix it.
Posted by Yuri Pakhomov at March 26, 2006 8:32 AM
Images are loaded by the embedded Internet Explorer control - you will need to set up your IE proxy in order to load them.
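In other words, the HTML is handed to the browser control, and the control fetches any referenced images itself through WinInet, so only the IE proxy settings apply to them. A sketch of the pattern (using the .NET 2.0 WebBrowser control for brevity; the same idea applies to the ActiveX-hosted IE control a .NET 1.1 app embeds):

    // Sketch: HTML given to an embedded browser control is rendered by
    // MSHTML, which downloads the <img> below itself via WinInet - so the
    // IE proxy configuration applies, not the app's own WebProxy settings.
    using System.Windows.Forms;

    class BrowserHost : Form
    {
        public BrowserHost()
        {
            WebBrowser browser = new WebBrowser();
            browser.Dock = DockStyle.Fill;
            Controls.Add(browser);
            browser.DocumentText =
                "<html><body><img src='http://example.com/pic.png'/></body></html>";
        }

        [System.STAThread]
        static void Main() { Application.Run(new BrowserHost()); }
    }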
Posted by Luke Hutteman at March 26, 2006 11:48 PM
Any plans to move the existing SharpReader build to .NET 2.0? Thanks.
Posted by sam at April 1, 2006 2:49 PM
SharpReader is great! Could you make it remember the widths of the columns on the right (title, width, author etc.) though?
Posted by Chris Percival at April 12, 2006 3:42 AM
I like SharpReader. It is a simple, easy-to-use, no-frills RSS aggregator.
Its main drawback is its big memory footprint on the PC.
On Win XP Pro SP2, with .NET Framework 1.1 and 19 RSS feeds no older than 4 weeks, Mem Usage = 31 MB and VM Size = 27 MB. That is TOO MUCH.
Any plan to optimize the code to reduce the footprint?
Thanks.
Re: footprint ...
If you consider that Internet Explorer easily uses more than 20 MB per window, then it doesn't seem TOO MUCH to me. The memory usage probably even comes from the embedded Internet Explorer control?
SharpReader whips the llama's butt!
Here are a couple of feature requests I think would kick butt:
1. If it makes the program run (or load) any faster, allow older feed items to be archived. I hang onto a lot of items 'cause I suspect I may need to search through them one day, but I wonder if that slows things down.
2. Allow archiving either on a feed-by-feed basis or by folder & children.
3. Allow archiving by "oldest X items" or "anything older than mm/dd/yy".
4. Could these archived items be made into an XML file or maybe a browser bookmarks file? (See the sketch after this list.)
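A rough sketch of what request 4 could look like - dumping items older than a cutoff to a standalone XML file. The item data and field layout below are made up for illustration, not SharpReader's actual storage format:

    // Sketch: archive feed items older than a cutoff date to an XML file.
    using System;
    using System.Xml;

    class ArchiveDemo
    {
        static void Main()
        {
            DateTime cutoff = new DateTime(2006, 1, 1);
            // Hypothetical stand-ins for stored items: title, link, date.
            object[][] items = {
                new object[] { "Old post", "http://example.com/1", new DateTime(2005, 12, 1) },
                new object[] { "New post", "http://example.com/2", new DateTime(2006, 7, 1) }
            };

            XmlTextWriter w = new XmlTextWriter("archive.xml",
                System.Text.Encoding.UTF8);
            w.Formatting = Formatting.Indented;
            w.WriteStartElement("archive");
            foreach (object[] item in items)
            {
                if ((DateTime)item[2] >= cutoff) continue; // keep recent items live
                w.WriteStartElement("item");
                w.WriteAttributeString("title", (string)item[0]);
                w.WriteAttributeString("link", (string)item[1]);
                w.WriteAttributeString("date", ((DateTime)item[2]).ToString("s"));
                w.WriteEndElement();
            }
            w.WriteEndElement();
            w.Close();
            Console.WriteLine("Archived items written to archive.xml");
        }
    }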
Thanks for your consideration. Naturally, I expect you to implement these things by Monday morning (jk). Have a great weekend!
-Jeepy
Posted by Jeepy at July 14, 2006 12:51 PM