I think it's more of a fad than a reality for many. Web 2.0 aims at giving the user a friendly reception and stay, without compromising or exaggerating the ambience and other critical elements of a webpage.
But lately, I have found some web developers throwing up unacceptable pages in the name of 'Web 2.0'. They use Flash banners where a simple static image would do. They use JavaScript to render page content where simple HTML tables would do, yet they won't use AJAX for form submission or input validation (huh?). Isn't this a blunder, knowing that search engine spiders cannot handle JavaScript well? Only I know what toll it takes on my bandwidth and page load time! Some webmasters are trying more and more to make a page 'attractive' at the expense of visitors. Tell me whether that should be called 'more usable'.
My message here is that Web 2.0 is about making web pages more usable, not more avoidable. Websites grow on the value they offer more than on anything else.
-Rohan Shenoy
Tuesday, January 22, 2008
Saturday, January 19, 2008
A great browser plugin for Firefox, Flock and SeaMonkey: especially useful for web developers
A great browser plugin for web developers. Change images, CSS, HTML, cookies, forms, resolutions, etc!
All of its features will come in handy!
Download and screenshots:
http://chrispederick.com/work/web-developer/
Thursday, January 17, 2008
Print links in the printable version using JavaScript
With screen media, one can easily link to pages on the web (e.g. "Click here to visit Google"). But when a user prints the page, all your links will probably be lost, or at least won't print the way you want them to. To avoid this, it would be nice if we could somehow bundle the links with the printed matter. That is what I recently wanted for my website, and I found that JavaScript could easily help me.
To see what I mean, look at the screenshot below. It is the printable version of one of the pages of my website.
The logic is to extract the hyperlinks from the body of the printed matter and list them separately near the footer. To see a live example, click the "Print this page" link in the bottom-right of this page.
You will be shown a printer-friendly version of the page, with the outbound links extracted from the body matter and put in a separate area.
So how do we do that? It takes just three simple steps!
- Use the JavaScript method getElementsByTagName. Since we need the elements with the "a" (anchor) tag, we collect all hyperlinks with document.getElementsByTagName('a'). This returns a collection of all hyperlinks in the matter. Let's call this collection links.
- Calculate the size of this collection using the links.length property.
- Now let's run a loop that executes once for every hyperlink. In each iteration we perform three actions: 1. get the HREF attribute of a hyperlink, 2. get the TITLE attribute of the same hyperlink, and 3. print the hyperlink's title and HREF attribute.
var links=document.getElementsByTagName("a"); //returns a collection of all hyperlinks
var no=links.length; //calculates the number of hyperlinks
if(no>0) //execute further action only if at least 1 hyperlink is found in the matter
{
document.write("Important links from this page have been given below. Please visit them:<br />");
for(var i=0;i<no;i++) //we start the loop
{
var href=links[i].href; //we obtain the HREF attribute of a hyperlink
var title=links[i].title; //we obtain the TITLE attribute of the same hyperlink
document.write(title+":<br />=>"+href+"<br />"); //now we write the link
}
}
else
{
false; //nothing is printed if no hyperlink is found in the matter
}
To get this method working, you need to have the TITLE attribute on every link in your matter. If you don't, you may decide to skip the title and print only the HREF attribute.
Changing the structure of your website and its hyperlinks? Here is what you can do to avoid losing traffic
For many webmasters, HTML is the first thing they learn when they want to start a website. Then one fine day they are introduced to the superior powers of PHP and other programming languages. Managing a PHP-MySQL powered website is very easy compared to those forever-static HTML webpages (of course, only if you know at least some PHP).
So when you throw your energies into an upgrade to PHP-powered webpages, you realise that this means changing file names. It is quite possible that www.yoursite.com/this-is-a-webpage.html is no longer available, or is now available at www.yoursite.com/this-is-a-webpage.php.
While it may be easy for you to cope with filename changes, it isn't so for visitors who reach your site by following a hyperlink from some other webpage, a bookmark, a search engine, and so on. You may change the links on your own webpages to suit the new filenames, but that is hard to manage when your links are published on many external webpages and in search engines. Your visitors would end up seeing "Error 404: File not found", get irritated and, needless to say, leave your site!
To solve this, follow these four simple steps:
1. Once the upgrade is complete, use permanent redirects (301) to send visitors and crawlers to the new location when they request the old location of a file (see the sketch after this list). To confirm that the redirection is working, simply type in the old location of the file and check that it redirects to the new one.
2. Generate a new sitemap. Sitemaps are usually available in multiple formats, but let's use the XML and HTML formats for this tutorial.
3. Submit the XML sitemap to Google (I assume you are using Google Webmaster Tools).
4. Edit your custom Error 404 (file not found) page to show the new sitemap in HTML format. You could add a small note reading "Dear visitor, we recently upgraded and renamed some files in the process. The file you requested could not be found. We have listed all the webpages on this website below. Please choose the page you were looking for."
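The exact mechanics depend on your server: on Apache, one Redirect 301 line per renamed file in .htaccess is the usual way to handle step 1. As a rough PHP-only sketch of steps 1 and 4 combined, assuming requests for missing pages are routed to a custom PHP error page (for example via Apache's ErrorDocument directive), and with every file name below being a placeholder:

<?php
// 404.php - sketch of a custom "file not found" handler; all names are placeholders
// Map of old HTML addresses to their new PHP addresses
$moved = array(
    '/this-is-a-webpage.html' => '/this-is-a-webpage.php',
    '/another-old-page.html'  => '/another-old-page.php'
);

$requested = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (isset($moved[$requested])) {
    // The old address was requested: issue a permanent (301) redirect to the new one
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://www.yoursite.com' . $moved[$requested]);
    exit;
}

// Genuinely missing page: show the friendly note and the HTML sitemap instead
header('HTTP/1.1 404 Not Found');
echo 'Dear visitor, we recently upgraded and renamed some files in the process. ';
echo 'The page you requested could not be found. Please choose it from the list below.';
include('sitemap.html');
?>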
Now, you can sit back and admire your upgraded website without fear of losing any traffic.
Legal documents for a website
Anybody who manages a website probably knows how easy it is to be sued over a minor error. That could very well mean a big hole in your pocket and a lot of bad publicity. Even if the jury rules in your favour and sets you free, I am sure your legal bill will not be so lenient. As a webmaster you won't intentionally mislead your visitors, but nobody knows when a typo may change the meaning of the content. So is there anything you can do about it? Apart from thorough proof-reading and appropriate research, web publishers can safeguard their position, revenue and reputation by including disclaimer, terms of use, privacy policy and similar documents on their websites. If you aren't able to create those from scratch, you can find free legal document templates at Website-Law.co.uk.
As an additional exercise, you can study the legal documents of popular websites such as YouTube, or observe ongoing and past legal discussions at the DigitalPoint forums.
Using the PHP include() function and SEO
It is very convenient to use the PHP include() function when creating websites. Some extra-cautious webmasters, like me, would think twice before using it, especially when it comes to Search Engine Optimisation (SEO).
When I asked people and looked at real-life examples, all my worries were relieved. A crawler or spider is pretty much comparable to an ordinary visitor to your website, except that it has some extended capacities in certain areas and can't enjoy the JavaScript on your pages! So the crawler will see the text content exactly as you see it. If you are interested in seeing what a crawler sees on your webpage, you can download a text-only browser such as ELinks and visit your pages.
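To see why the worry is unfounded, here is a minimal sketch (header.php and footer.php are just placeholder file names). PHP merges the included files on the server, so the browser, and the crawler, receive one ordinary HTML page with no trace of the include() calls:

<?php include('header.php'); ?>
<p>This paragraph, along with whatever header.php and footer.php print,
reaches the visitor and the crawler as one plain HTML page.</p>
<?php include('footer.php'); ?>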
Almost all forums, CMSs and blogs make at least some use of this function, and they are doing fine. So relax and use it!
However, if you plan to include files hosted on a website other than yours, you may need some input from an article at FAQTs and from PHP.net (the official site; read the warnings with the red background carefully there).
Supplied argument is not a valid MySQL result resource
When I was learning PHP-MySQL, this was one of the most common errors I encountered. I know it feels very bad when you enthusiastically run a query and the server returns the error "supplied argument is not a valid MySQL result resource". On rechecking the code, I usually realised that the query had not been constructed properly.
Incorrect syntax is the most common offender for newbies. Possibly the error lies in one of these:
- You did not spell SELECT correctly. While typing rapidly, it is very common to misplace an E somewhere.
- You did not include the asterisk (*). It should be "SELECT * FROM table WHERE...".
- You named the variable containing the query wrongly or did not call it properly, e.g. declaring $query to hold the query but calling mysql_query() on some other variable, like mysql_query($qurey).
- You did not connect to the server properly. Either you supplied incorrect login details or chose the incorrect database/table.
MySQL's own error message will tell you where you went wrong without you having to spend time searching the internet.
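A minimal sketch of how to make MySQL report the problem itself, using the old mysql_* functions this post refers to (the connection details, database and table names below are placeholders):

<?php
// Connection details, database and table names below are placeholders
$link = mysql_connect('localhost', 'username', 'password') or die(mysql_error());
mysql_select_db('mydatabase', $link) or die(mysql_error());

$query = "SELECT * FROM articles WHERE id = 1";
// If the query is malformed, mysql_error() prints MySQL's exact complaint
// instead of the vague "not a valid result resource" further down
$result = mysql_query($query) or die(mysql_error());

while ($row = mysql_fetch_assoc($result)) {
    echo $row['title'];
}
?>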
An inbound link from Wikipedia: is it still valuable for SEO?
Any webmaster who knows the competition in Google's or Yahoo!'s search rankings knows the importance of inbound links. SEO experts have always stressed getting inbound links pointing to your site. While this does work, there is a difference in how inbound links are weighed. For example, 100 inbound links from small sites may not be worth anywhere close to one inbound link from Yahoo!, CNN, the BBC, or any other 'high quality' website for that matter. But what about Wikipedia?
The majority of users trust Wikipedia as a source of 'reliable' information. As most of you know, the content of Wikipedia is user-editable: you can visit a page and edit it. When webmasters learn this, they are tempted to build inbound links from Wikipedia. They create a short article with their keyword in the title and link to their website either in the body of the article or under 'External links and references'. As if this were not enough, it also means your competitor could edit the article you created and change the links to point to his site. That would be a very nasty game and would result in a dreadful experience for the millions who come to Wikipedia daily for reliable information.
A simple hyperlink attribute called "rel", when set to "nofollow", does the needful. To see it, simply visit Wikipedia and view the HTML source of a page. Find the external hyperlinks and observe that they carry a rel="nofollow" attribute. This tells crawlers not to attach any significance to the link.
However, if you still wish to create an inbound link from Wikipedia, do it only if you have something of value to offer its readers. They might follow the link and visit your site. Good luck!