
Re: Task2



Hi Donovan,
Okay, now it is clearer to me. Since you are expanding to all the links that
point to a page or are pointed to by a page, some of those pages won't be
present in the crawled list, because my crawler did not crawl the entire ASU
Web. Hence hashids is bound to return null whenever such an uncrawled link
comes in.

If no one else is getting this problem, hmmm, then they need to start
working on Task 2 soon ;-)

Ullas

"Well Begun is Half Done"

On Tue, 27 Mar 2001, Donovan wrote:

> I thought you wrote the CSE494pgRank package.  The code is in the source of
> the LinkExtract class, in the Links method.  hashids is the hash table that
> is created by LinkExtract; the HashedLinks file is the one provided in the
> jar file.  I did some more research on the Hashtable class: if there is no
> match for the key passed in, it returns null.  It SHOULDN'T return null
> here, since the URL that crashes was obtained from the Citations method.
> Anyway, the Links method doesn't check whether the string returned by the
> call to hashids.get(fileName) is null.  I can keep the Links method from
> crashing the program by checking whether that string is null and returning
> an empty array list; the only place to do this is in the Links method.  My
> program doesn't crash anymore, but there is still the question of why this
> could happen in the first place.  If a URL (or file name) can be obtained
> from the Citations method, then it should be obtainable from the Links
> method as well, right?  So why does it return null?  That's what I want to
> know.  Kinda upsetting...  I'm surprised that no one else is getting this
> problem.
>